Test Report: Docker_Linux_crio_arm64 21767

                    
792b73f7e6a323c75f1a3ad863987d7e01fd8059:2025-10-25:42055

Failed tests (39/326)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.32
35 TestAddons/parallel/Registry 16.01
36 TestAddons/parallel/RegistryCreds 0.61
37 TestAddons/parallel/Ingress 143.4
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.39
41 TestAddons/parallel/CSI 36.27
42 TestAddons/parallel/Headlamp 3.72
43 TestAddons/parallel/CloudSpanner 6.37
44 TestAddons/parallel/LocalPath 10.42
45 TestAddons/parallel/NvidiaDevicePlugin 5.31
46 TestAddons/parallel/Yakd 6.26
97 TestFunctional/parallel/ServiceCmdConnect 603.58
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.9
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
135 TestFunctional/parallel/ServiceCmd/Format 0.65
136 TestFunctional/parallel/ServiceCmd/URL 0.58
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.34
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.44
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 535.36
174 TestMultiControlPlane/serial/DeleteSecondaryNode 8.4
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 3.43
190 TestJSONOutput/pause/Command 1.98
196 TestJSONOutput/unpause/Command 1.85
280 TestPause/serial/Pause 7.88
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.81
302 TestStartStop/group/old-k8s-version/serial/Pause 6.44
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.51
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.63
320 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.04
326 TestStartStop/group/embed-certs/serial/Pause 8.84
330 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.54
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.54
342 TestStartStop/group/newest-cni/serial/Pause 7.7
347 TestStartStop/group/no-preload/serial/Pause 7.54
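Each failure below includes the captured test log. To iterate on a single failure locally, the test can be re-run by name with the standard Go test runner; a minimal sketch, assuming a checkout of the minikube repo with out/minikube-linux-arm64 already built (the CI harness passes additional suite flags not shown here):

    # Re-run one failed integration test by its slash-separated name.
    # -run matches each path element as an anchored regular expression.
    go test ./test/integration -run 'TestAddons/serial/Volcano' -v -timeout 30m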
TestAddons/serial/Volcano (0.32s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable volcano --alsologtostderr -v=1: exit status 11 (323.940827ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1025 09:49:20.555179  267938 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:49:20.556633  267938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:20.556650  267938 out.go:374] Setting ErrFile to fd 2...
	I1025 09:49:20.556656  267938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:20.556952  267938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:49:20.557301  267938 mustload.go:65] Loading cluster: addons-184548
	I1025 09:49:20.557760  267938 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:20.557788  267938 addons.go:606] checking whether the cluster is paused
	I1025 09:49:20.557899  267938 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:20.557916  267938 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:49:20.559344  267938 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:49:20.582818  267938 ssh_runner.go:195] Run: systemctl --version
	I1025 09:49:20.582905  267938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:49:20.600804  267938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:49:20.704707  267938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:49:20.704821  267938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:49:20.743932  267938 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:49:20.743952  267938 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:49:20.743957  267938 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:49:20.743961  267938 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:49:20.743964  267938 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:49:20.743968  267938 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:49:20.743971  267938 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:49:20.743974  267938 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:49:20.743978  267938 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:49:20.743985  267938 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:49:20.743988  267938 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:49:20.743991  267938 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:49:20.743994  267938 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:49:20.743998  267938 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:49:20.744006  267938 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:49:20.744012  267938 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:49:20.744017  267938 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:49:20.744022  267938 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:49:20.744025  267938 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:49:20.744028  267938 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:49:20.744032  267938 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:49:20.744036  267938 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:49:20.744039  267938 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:49:20.744042  267938 cri.go:89] found id: ""
	I1025 09:49:20.744093  267938 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:49:20.759152  267938 out.go:203] 
	W1025 09:49:20.762107  267938 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:49:20.762143  267938 out.go:285] * 
	W1025 09:49:20.767862  267938 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:49:20.770891  267938 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.32s)
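All of the MK_ADDON_DISABLE_PAUSED failures in this run exit the same way: before disabling an addon, minikube checks whether the cluster is paused (addons.go:606) by listing kube-system containers through crictl and then running `sudo runc list -f json` on the node, and the runc call fails because /run/runc does not exist. The probe can be replayed by hand against the same profile; a sketch using only commands that already appear in the log above:

    # Replay the paused-state probe that `addons disable` runs on the node.
    out/minikube-linux-arm64 -p addons-184548 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    out/minikube-linux-arm64 -p addons-184548 ssh -- sudo runc list -f json
    # Expected: crictl lists container IDs; runc exits 1 with
    # "open /run/runc: no such file or directory", matching the failure above.

The missing /run/runc suggests runc's default state directory is simply absent under this crio setup; the report itself does not confirm the root cause.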

                                                
                                    
TestAddons/parallel/Registry (16.01s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 14.068591ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-cft48" [3b3f9d6f-cbd8-4b92-987f-b61c282e6860] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003640951s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-l4vs6" [2d113e11-a239-4418-8da7-40a53e33fd75] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005158999s
addons_test.go:392: (dbg) Run:  kubectl --context addons-184548 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-184548 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-184548 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.41602719s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 ip
2025/10/25 09:49:46 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable registry --alsologtostderr -v=1: exit status 11 (286.312915ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1025 09:49:46.914398  268474 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:49:46.915342  268474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:46.915361  268474 out.go:374] Setting ErrFile to fd 2...
	I1025 09:49:46.915366  268474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:46.915661  268474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:49:46.915975  268474 mustload.go:65] Loading cluster: addons-184548
	I1025 09:49:46.916350  268474 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:46.916375  268474 addons.go:606] checking whether the cluster is paused
	I1025 09:49:46.916486  268474 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:46.916502  268474 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:49:46.916961  268474 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:49:46.934923  268474 ssh_runner.go:195] Run: systemctl --version
	I1025 09:49:46.934982  268474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:49:46.952479  268474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:49:47.060978  268474 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:49:47.061077  268474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:49:47.100858  268474 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:49:47.100891  268474 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:49:47.100901  268474 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:49:47.100906  268474 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:49:47.100910  268474 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:49:47.100914  268474 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:49:47.100917  268474 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:49:47.100920  268474 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:49:47.100923  268474 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:49:47.100939  268474 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:49:47.100945  268474 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:49:47.100952  268474 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:49:47.100956  268474 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:49:47.100959  268474 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:49:47.100962  268474 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:49:47.100974  268474 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:49:47.100985  268474 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:49:47.100992  268474 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:49:47.100995  268474 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:49:47.101002  268474 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:49:47.101007  268474 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:49:47.101022  268474 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:49:47.101036  268474 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:49:47.101045  268474 cri.go:89] found id: ""
	I1025 09:49:47.101115  268474 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:49:47.121374  268474 out.go:203] 
	W1025 09:49:47.124366  268474 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:49:47.124415  268474 out.go:285] * 
	W1025 09:49:47.129612  268474 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:49:47.132844  268474 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.01s)
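Note that the registry itself was healthy: both pods went Ready and the in-cluster wget succeeded; only the trailing `addons disable registry` call hit the same runc probe failure as TestAddons/serial/Volcano above. If the registry needed checking by hand, the NodePort fetched in the DEBUG line could be queried through the standard Docker registry v2 API; a hedged sketch (the /v2/_catalog path is the stock registry endpoint, not something this test calls):

    # 192.168.49.2:5000 comes from the `minikube ip` + DEBUG GET in the log.
    # An empty repository list is a fine answer; any HTTP response means the
    # registry is serving.
    curl -s http://192.168.49.2:5000/v2/_catalog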

                                                
                                    
TestAddons/parallel/RegistryCreds (0.61s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.376129ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-184548
addons_test.go:332: (dbg) Run:  kubectl --context addons-184548 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (274.916407ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1025 09:50:30.183303  270459 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:50:30.187180  270459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:50:30.187201  270459 out.go:374] Setting ErrFile to fd 2...
	I1025 09:50:30.187207  270459 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:50:30.187511  270459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:50:30.187860  270459 mustload.go:65] Loading cluster: addons-184548
	I1025 09:50:30.188268  270459 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:50:30.188285  270459 addons.go:606] checking whether the cluster is paused
	I1025 09:50:30.188390  270459 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:50:30.188400  270459 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:50:30.188922  270459 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:50:30.208964  270459 ssh_runner.go:195] Run: systemctl --version
	I1025 09:50:30.209037  270459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:50:30.229313  270459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:50:30.336789  270459 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:50:30.336878  270459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:50:30.366094  270459 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:50:30.366116  270459 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:50:30.366122  270459 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:50:30.366131  270459 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:50:30.366136  270459 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:50:30.366139  270459 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:50:30.366143  270459 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:50:30.366146  270459 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:50:30.366149  270459 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:50:30.366155  270459 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:50:30.366159  270459 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:50:30.366163  270459 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:50:30.366166  270459 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:50:30.366170  270459 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:50:30.366173  270459 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:50:30.366178  270459 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:50:30.366181  270459 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:50:30.366187  270459 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:50:30.366190  270459 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:50:30.366193  270459 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:50:30.366198  270459 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:50:30.366205  270459 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:50:30.366208  270459 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:50:30.366211  270459 cri.go:89] found id: ""
	I1025 09:50:30.366267  270459 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:50:30.381854  270459 out.go:203] 
	W1025 09:50:30.384804  270459 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:50:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:50:30.384837  270459 out.go:285] * 
	W1025 09:50:30.389919  270459 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:50:30.392933  270459 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.61s)

                                                
                                    
TestAddons/parallel/Ingress (143.4s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-184548 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-184548 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-184548 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [0d9a4cc9-d129-40d7-afe1-17132f28ed0c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [0d9a4cc9-d129-40d7-afe1-17132f28ed0c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004939658s
I1025 09:50:17.153136  261256 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.499073678s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
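Here the failure is a genuine timeout rather than the runc probe: curl inside the node exited with code 28, curl's operation-timed-out status, which `minikube ssh` propagated as the process exit code. A hedged triage sequence, reusing only names from this log (the ingress object comes from testdata/nginx-ingress-v1.yaml, so its name is not shown here):

    # Is the controller Ready, and did the ingress get an address?
    kubectl --context addons-184548 -n ingress-nginx get pods -o wide
    kubectl --context addons-184548 get ingress -A
    # Retry the probe verbosely with a short cap instead of a 2m10s hang.
    out/minikube-linux-arm64 -p addons-184548 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"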
addons_test.go:288: (dbg) Run:  kubectl --context addons-184548 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-184548
helpers_test.go:243: (dbg) docker inspect addons-184548:

-- stdout --
	[
	    {
	        "Id": "d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa",
	        "Created": "2025-10-25T09:46:43.864888409Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 262403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:46:43.925349297Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa/hostname",
	        "HostsPath": "/var/lib/docker/containers/d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa/hosts",
	        "LogPath": "/var/lib/docker/containers/d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa/d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa-json.log",
	        "Name": "/addons-184548",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-184548:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-184548",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa",
	                "LowerDir": "/var/lib/docker/overlay2/70a2730a7c6d8a28c641099609d27ac2418e31332416ad60480de8113ee47513-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/70a2730a7c6d8a28c641099609d27ac2418e31332416ad60480de8113ee47513/merged",
	                "UpperDir": "/var/lib/docker/overlay2/70a2730a7c6d8a28c641099609d27ac2418e31332416ad60480de8113ee47513/diff",
	                "WorkDir": "/var/lib/docker/overlay2/70a2730a7c6d8a28c641099609d27ac2418e31332416ad60480de8113ee47513/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-184548",
	                "Source": "/var/lib/docker/volumes/addons-184548/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-184548",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-184548",
	                "name.minikube.sigs.k8s.io": "addons-184548",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1f4b1502031e199b68d3ceebdd2c1ed9f60c627fb314ed5892653a598b960c8b",
	            "SandboxKey": "/var/run/docker/netns/1f4b1502031e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-184548": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:90:e3:f2:e5:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d8057e9a9ef0fb708e302fb11c8c51feb3894af3ea427677c9c6034fe8ed2ba",
	                    "EndpointID": "614a9a5549e8d82f9b7f8c5c5fbb79a6845a9ec993e865a980c8bb97a67b310b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-184548",
	                        "d746aa6cc56e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
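The Ports map in this inspect output is what the earlier sshutil.go:53 line consumed (127.0.0.1:33133 -> 22/tcp). The same value can be extracted directly with the Go template the harness itself runs (cli_runner.go:164), quoted here for a plain shell:

    # Print the host port mapped to the node's SSH port; yields 33133 above.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-184548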
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-184548 -n addons-184548
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-184548 logs -n 25: (1.562750539s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-540570                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-540570 │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ --download-only -p binary-mirror-439045 --alsologtostderr --binary-mirror http://127.0.0.1:39931 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-439045   │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ delete  │ -p binary-mirror-439045                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-439045   │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ addons  │ enable dashboard -p addons-184548                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ addons  │ disable dashboard -p addons-184548                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ start   │ -p addons-184548 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:49 UTC │
	│ addons  │ addons-184548 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ addons  │ addons-184548 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ addons  │ addons-184548 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ addons  │ addons-184548 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ ip      │ addons-184548 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │ 25 Oct 25 09:49 UTC │
	│ addons  │ addons-184548 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ ssh     │ addons-184548 ssh cat /opt/local-path-provisioner/pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │ 25 Oct 25 09:49 UTC │
	│ addons  │ addons-184548 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ addons  │ enable headlamp -p addons-184548 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ addons  │ addons-184548 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ addons  │ addons-184548 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ addons  │ addons-184548 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:50 UTC │                     │
	│ addons  │ addons-184548 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:50 UTC │                     │
	│ ssh     │ addons-184548 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:50 UTC │                     │
	│ addons  │ addons-184548 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:50 UTC │                     │
	│ addons  │ addons-184548 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:50 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-184548                                                                                                                                                                                                                                                                                                                                                                                           │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:50 UTC │ 25 Oct 25 09:50 UTC │
	│ addons  │ addons-184548 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:50 UTC │                     │
	│ ip      │ addons-184548 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:52 UTC │ 25 Oct 25 09:52 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
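	
	The audit table above records the addon commands the suite ran against the addons-184548 profile. As a minimal sketch (profile name taken from the table; the choice of addon is arbitrary), the same toggle cycle can be reproduced by hand with the binary under test:
	
		# enable an addon, confirm its state, then disable it again
		out/minikube-linux-arm64 -p addons-184548 addons enable headlamp --alsologtostderr -v=1
		out/minikube-linux-arm64 -p addons-184548 addons list
		out/minikube-linux-arm64 -p addons-184548 addons disable headlamp --alsologtostderr -v=1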
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:46:17
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
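	
	Each entry below carries the klog prefix described above: a severity letter (I/W/E/F), month+day, wall-clock time with microseconds, the thread id, then the emitting source file and line. A quick way to pull only the warnings and errors out of a saved copy of this log (the filename is hypothetical):
	
		# keep W/E/F entries; the prefix looks like "W1025 09:46:37.097254 ..."
		grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log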
	I1025 09:46:17.798034  262001 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:46:17.798150  262001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:17.798161  262001 out.go:374] Setting ErrFile to fd 2...
	I1025 09:46:17.798167  262001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:17.798443  262001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:46:17.798921  262001 out.go:368] Setting JSON to false
	I1025 09:46:17.799739  262001 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5329,"bootTime":1761380249,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:46:17.799813  262001 start.go:141] virtualization:  
	I1025 09:46:17.803169  262001 out.go:179] * [addons-184548] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:46:17.806898  262001 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:46:17.806997  262001 notify.go:220] Checking for updates...
	I1025 09:46:17.812937  262001 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:46:17.815822  262001 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 09:46:17.818629  262001 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 09:46:17.821549  262001 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:46:17.824451  262001 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:46:17.827632  262001 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:46:17.851753  262001 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:46:17.851890  262001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:17.912370  262001 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-25 09:46:17.902522334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:46:17.912475  262001 docker.go:318] overlay module found
	I1025 09:46:17.915659  262001 out.go:179] * Using the docker driver based on user configuration
	I1025 09:46:17.918475  262001 start.go:305] selected driver: docker
	I1025 09:46:17.918496  262001 start.go:925] validating driver "docker" against <nil>
	I1025 09:46:17.918511  262001 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:46:17.919230  262001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:17.971680  262001 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-25 09:46:17.962443717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:46:17.971837  262001 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:46:17.972084  262001 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:46:17.974996  262001 out.go:179] * Using Docker driver with root privileges
	I1025 09:46:17.978124  262001 cni.go:84] Creating CNI manager for ""
	I1025 09:46:17.978191  262001 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:46:17.978201  262001 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:46:17.978290  262001 start.go:349] cluster config:
	{Name:addons-184548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-184548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:46:17.981612  262001 out.go:179] * Starting "addons-184548" primary control-plane node in "addons-184548" cluster
	I1025 09:46:17.984581  262001 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:46:17.987767  262001 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:46:17.990713  262001 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:46:17.990970  262001 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:17.991013  262001 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:46:17.991033  262001 cache.go:58] Caching tarball of preloaded images
	I1025 09:46:17.991111  262001 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:46:17.991125  262001 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:46:17.991469  262001 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/config.json ...
	I1025 09:46:17.991498  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/config.json: {Name:mk36831340e80edd5b284df694d7fb9085ffb2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:18.019742  262001 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:46:18.019900  262001 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 09:46:18.019928  262001 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 09:46:18.019937  262001 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 09:46:18.019946  262001 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 09:46:18.019951  262001 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1025 09:46:35.926538  262001 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1025 09:46:35.926580  262001 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:46:35.926626  262001 start.go:360] acquireMachinesLock for addons-184548: {Name:mkee07b743b61356246760cb6ca511eba06d1efd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:46:35.926739  262001 start.go:364] duration metric: took 89.002µs to acquireMachinesLock for "addons-184548"
	I1025 09:46:35.926771  262001 start.go:93] Provisioning new machine with config: &{Name:addons-184548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-184548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:46:35.926857  262001 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:46:35.930325  262001 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 09:46:35.930578  262001 start.go:159] libmachine.API.Create for "addons-184548" (driver="docker")
	I1025 09:46:35.930615  262001 client.go:168] LocalClient.Create starting
	I1025 09:46:35.930740  262001 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem
	I1025 09:46:36.530284  262001 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem
	I1025 09:46:37.079619  262001 cli_runner.go:164] Run: docker network inspect addons-184548 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:46:37.097254  262001 cli_runner.go:211] docker network inspect addons-184548 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:46:37.097370  262001 network_create.go:284] running [docker network inspect addons-184548] to gather additional debugging logs...
	I1025 09:46:37.097392  262001 cli_runner.go:164] Run: docker network inspect addons-184548
	W1025 09:46:37.112662  262001 cli_runner.go:211] docker network inspect addons-184548 returned with exit code 1
	I1025 09:46:37.112696  262001 network_create.go:287] error running [docker network inspect addons-184548]: docker network inspect addons-184548: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-184548 not found
	I1025 09:46:37.112711  262001 network_create.go:289] output of [docker network inspect addons-184548]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-184548 not found
	
	** /stderr **
	I1025 09:46:37.112820  262001 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:46:37.130736  262001 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197e5b0}
	I1025 09:46:37.130776  262001 network_create.go:124] attempt to create docker network addons-184548 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 09:46:37.130842  262001 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-184548 addons-184548
	I1025 09:46:37.189293  262001 network_create.go:108] docker network addons-184548 192.168.49.0/24 created
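	
	The subnet, gateway and MTU passed to `docker network create` above come straight from the free-subnet probe two lines earlier. A sketch of verifying the result from the host, using only resources named in the log:
	
		# confirm the bridge network minikube created for this cluster
		docker network inspect addons-184548 \
		  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
		# expected output: 192.168.49.0/24 192.168.49.1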
	I1025 09:46:37.189332  262001 kic.go:121] calculated static IP "192.168.49.2" for the "addons-184548" container
	I1025 09:46:37.189405  262001 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:46:37.204558  262001 cli_runner.go:164] Run: docker volume create addons-184548 --label name.minikube.sigs.k8s.io=addons-184548 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:46:37.223677  262001 oci.go:103] Successfully created a docker volume addons-184548
	I1025 09:46:37.223768  262001 cli_runner.go:164] Run: docker run --rm --name addons-184548-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-184548 --entrypoint /usr/bin/test -v addons-184548:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:46:39.327210  262001 cli_runner.go:217] Completed: docker run --rm --name addons-184548-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-184548 --entrypoint /usr/bin/test -v addons-184548:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.103401574s)
	I1025 09:46:39.327253  262001 oci.go:107] Successfully prepared a docker volume addons-184548
	I1025 09:46:39.327285  262001 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:39.327306  262001 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:46:39.327367  262001 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-184548:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:46:43.790608  262001 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-184548:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.463197984s)
	I1025 09:46:43.790640  262001 kic.go:203] duration metric: took 4.463331163s to extract preloaded images to volume ...
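	
	The ~4.5s step above is the preload shortcut: rather than pulling images inside the node, minikube untars a prebuilt image set directly into the addons-184548 volume. The same pattern, parameterized (both variables are placeholders for the tarball path and kicbase image named in the log):
	
		# mount the preload read-only and extract it into the node's /var volume
		docker run --rm --entrypoint /usr/bin/tar \
		  -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
		  -v addons-184548:/extractDir \
		  "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir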
	W1025 09:46:43.790787  262001 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 09:46:43.790889  262001 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:46:43.845544  262001 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-184548 --name addons-184548 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-184548 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-184548 --network addons-184548 --ip 192.168.49.2 --volume addons-184548:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:46:44.153265  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Running}}
	I1025 09:46:44.183122  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:46:44.207472  262001 cli_runner.go:164] Run: docker exec addons-184548 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:46:44.265188  262001 oci.go:144] the created container "addons-184548" has a running status.
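	
	From here on the node is an ordinary Docker container. A quick sanity check that it is running and holds the static IP calculated above (a sketch; the template is standard `docker inspect` syntax):
	
		docker container inspect addons-184548 \
		  --format '{{.State.Status}} {{(index .NetworkSettings.Networks "addons-184548").IPAddress}}'
		# expected output: running 192.168.49.2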
	I1025 09:46:44.265219  262001 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa...
	I1025 09:46:45.052696  262001 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:46:45.077254  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:46:45.097434  262001 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:46:45.097456  262001 kic_runner.go:114] Args: [docker exec --privileged addons-184548 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:46:45.169513  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:46:45.202667  262001 machine.go:93] provisionDockerMachine start ...
	I1025 09:46:45.204389  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:45.239505  262001 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:45.239884  262001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 09:46:45.239910  262001 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:46:45.240763  262001 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 09:46:48.389631  262001 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-184548
	
	I1025 09:46:48.389657  262001 ubuntu.go:182] provisioning hostname "addons-184548"
	I1025 09:46:48.389959  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:48.415500  262001 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:48.415825  262001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 09:46:48.415845  262001 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-184548 && echo "addons-184548" | sudo tee /etc/hostname
	I1025 09:46:48.571490  262001 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-184548
	
	I1025 09:46:48.571584  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:48.589303  262001 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:48.589612  262001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 09:46:48.589635  262001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-184548' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-184548/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-184548' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:46:48.738139  262001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
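	
	The provisioner above set the container hostname and pinned it in /etc/hosts via the 127.0.1.1 convention. Both can be verified from the host side (a sketch):
	
		docker exec addons-184548 sh -c 'hostname; grep addons-184548 /etc/hosts'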
	I1025 09:46:48.738163  262001 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 09:46:48.738182  262001 ubuntu.go:190] setting up certificates
	I1025 09:46:48.738192  262001 provision.go:84] configureAuth start
	I1025 09:46:48.738253  262001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-184548
	I1025 09:46:48.755924  262001 provision.go:143] copyHostCerts
	I1025 09:46:48.756017  262001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 09:46:48.756150  262001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 09:46:48.756219  262001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 09:46:48.756270  262001 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.addons-184548 san=[127.0.0.1 192.168.49.2 addons-184548 localhost minikube]
	I1025 09:46:49.069873  262001 provision.go:177] copyRemoteCerts
	I1025 09:46:49.069938  262001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:46:49.069999  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:49.087428  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:46:49.189872  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:46:49.207405  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 09:46:49.226290  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:46:49.244537  262001 provision.go:87] duration metric: took 506.321462ms to configureAuth
	I1025 09:46:49.244566  262001 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:46:49.244759  262001 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:49.244874  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:49.262954  262001 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:49.263302  262001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 09:46:49.263323  262001 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:46:49.521119  262001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:46:49.521143  262001 machine.go:96] duration metric: took 4.316842388s to provisionDockerMachine
	I1025 09:46:49.521154  262001 client.go:171] duration metric: took 13.590528688s to LocalClient.Create
	I1025 09:46:49.521167  262001 start.go:167] duration metric: took 13.590591335s to libmachine.API.Create "addons-184548"
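	
	Part of the provisioning just finished was the CRIO_MINIKUBE_OPTIONS drop-in, which marks the whole service CIDR as an insecure registry so in-cluster registries can be used without TLS. It can be read back from the node (path taken from the command above):
	
		docker exec addons-184548 cat /etc/sysconfig/crio.minikube
		# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '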
	I1025 09:46:49.521175  262001 start.go:293] postStartSetup for "addons-184548" (driver="docker")
	I1025 09:46:49.521185  262001 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:46:49.521259  262001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:46:49.521299  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:49.540081  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:46:49.646496  262001 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:46:49.649944  262001 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:46:49.649975  262001 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:46:49.650007  262001 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 09:46:49.650086  262001 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 09:46:49.650115  262001 start.go:296] duration metric: took 128.934716ms for postStartSetup
	I1025 09:46:49.650441  262001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-184548
	I1025 09:46:49.668051  262001 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/config.json ...
	I1025 09:46:49.668337  262001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:46:49.668386  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:49.684919  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:46:49.786965  262001 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:46:49.791690  262001 start.go:128] duration metric: took 13.864815963s to createHost
	I1025 09:46:49.791713  262001 start.go:83] releasing machines lock for "addons-184548", held for 13.864960358s
	I1025 09:46:49.791788  262001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-184548
	I1025 09:46:49.809011  262001 ssh_runner.go:195] Run: cat /version.json
	I1025 09:46:49.809067  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:49.809330  262001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:46:49.809400  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:49.829044  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:46:49.836852  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:46:49.929781  262001 ssh_runner.go:195] Run: systemctl --version
	I1025 09:46:50.023136  262001 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:46:50.069153  262001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:46:50.073746  262001 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:46:50.073828  262001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:46:50.105565  262001 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 09:46:50.105642  262001 start.go:495] detecting cgroup driver to use...
	I1025 09:46:50.105712  262001 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:46:50.105792  262001 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:46:50.125319  262001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:46:50.138818  262001 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:46:50.138917  262001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:46:50.156283  262001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:46:50.175312  262001 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:46:50.288956  262001 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:46:50.411532  262001 docker.go:234] disabling docker service ...
	I1025 09:46:50.411616  262001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:46:50.435605  262001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:46:50.449662  262001 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:46:50.568766  262001 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:46:50.693403  262001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:46:50.705781  262001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:46:50.720017  262001 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:46:50.720132  262001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.728549  262001 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:46:50.728657  262001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.737214  262001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.745676  262001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.754458  262001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:46:50.762095  262001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.770982  262001 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.788951  262001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.797650  262001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:46:50.804936  262001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:46:50.812275  262001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:50.923320  262001 ssh_runner.go:195] Run: sudo systemctl restart crio
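	
	The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart. A sketch of checking its net effect (expected values reconstructed from the sed commands, not dumped from the node):
	
		docker exec addons-184548 grep -E \
		  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		  /etc/crio/crio.conf.d/02-crio.conf
		# expected, among other lines:
		#   pause_image = "registry.k8s.io/pause:3.10.1"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",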
	I1025 09:46:51.048496  262001 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:46:51.048583  262001 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:46:51.052466  262001 start.go:563] Will wait 60s for crictl version
	I1025 09:46:51.052528  262001 ssh_runner.go:195] Run: which crictl
	I1025 09:46:51.056056  262001 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:46:51.081881  262001 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:46:51.082022  262001 ssh_runner.go:195] Run: crio --version
	I1025 09:46:51.113909  262001 ssh_runner.go:195] Run: crio --version
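	
	The version handshake above can be repeated by hand against the node's CRI socket, with both binaries at the paths the log already located:
	
		docker exec addons-184548 sudo /usr/local/bin/crictl version
		docker exec addons-184548 crio --version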
	I1025 09:46:51.150964  262001 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:46:51.153867  262001 cli_runner.go:164] Run: docker network inspect addons-184548 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:46:51.171563  262001 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 09:46:51.175883  262001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:46:51.186404  262001 kubeadm.go:883] updating cluster {Name:addons-184548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-184548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:46:51.186525  262001 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:51.186590  262001 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:46:51.221268  262001 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:46:51.221295  262001 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:46:51.221373  262001 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:46:51.247689  262001 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:46:51.247713  262001 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:46:51.247721  262001 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 09:46:51.247808  262001 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-184548 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-184548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:46:51.247892  262001 ssh_runner.go:195] Run: crio config
	I1025 09:46:51.322518  262001 cni.go:84] Creating CNI manager for ""
	I1025 09:46:51.322540  262001 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:46:51.322556  262001 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:46:51.322606  262001 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-184548 NodeName:addons-184548 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:46:51.322777  262001 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-184548"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
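	
	The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a sketch, it can be checked with kubeadm's own validator, assuming `kubeadm config validate` is available in the pinned binary:
	
		docker exec addons-184548 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm \
		  config validate --config /var/tmp/minikube/kubeadm.yaml.new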
	
	I1025 09:46:51.322877  262001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:46:51.331139  262001 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:46:51.331210  262001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:46:51.339127  262001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 09:46:51.352931  262001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:46:51.366284  262001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1025 09:46:51.379245  262001 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:46:51.382872  262001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:46:51.393279  262001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:51.519648  262001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:46:51.534951  262001 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548 for IP: 192.168.49.2
	I1025 09:46:51.535015  262001 certs.go:195] generating shared ca certs ...
	I1025 09:46:51.535048  262001 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:51.535205  262001 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 09:46:51.674885  262001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt ...
	I1025 09:46:51.674920  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt: {Name:mk17b6c331a07a17ef84fde02319838a2ef3698b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:51.675147  262001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key ...
	I1025 09:46:51.675162  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key: {Name:mkc75db84f781c6e360c2b5ee59238e50158dd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:51.675253  262001 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 09:46:53.169401  262001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt ...
	I1025 09:46:53.169433  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt: {Name:mk6204789820541fbec61e8b3338e45bfbabb8eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:53.169603  262001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key ...
	I1025 09:46:53.169619  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key: {Name:mk03dd36e25909c48f80a11b0608190f600537f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:53.169694  262001 certs.go:257] generating profile certs ...
	I1025 09:46:53.169755  262001 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.key
	I1025 09:46:53.169776  262001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt with IP's: []
	I1025 09:46:54.423246  262001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt ...
	I1025 09:46:54.423278  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: {Name:mk17665482a38819e487cea64dec596148ccbdad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:54.423464  262001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.key ...
	I1025 09:46:54.423477  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.key: {Name:mk7016254a354af6a06fafff6e0189bc8732f0ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:54.423562  262001 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.key.9f359fbd
	I1025 09:46:54.423583  262001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.crt.9f359fbd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1025 09:46:54.550325  262001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.crt.9f359fbd ...
	I1025 09:46:54.550357  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.crt.9f359fbd: {Name:mk2a068cc8085d4305be8fdd0e0e528d7c5187c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:54.550522  262001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.key.9f359fbd ...
	I1025 09:46:54.550536  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.key.9f359fbd: {Name:mk88472b91a7a8d5389456f0257638c3f1be3f40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:54.550634  262001 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.crt.9f359fbd -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.crt
	I1025 09:46:54.550726  262001 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.key.9f359fbd -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.key
	I1025 09:46:54.550782  262001 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.key
	I1025 09:46:54.550804  262001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.crt with IP's: []
	I1025 09:46:54.793614  262001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.crt ...
	I1025 09:46:54.793646  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.crt: {Name:mk0f2992407286a8eb37719eeb18c6ecc353fe65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:54.793820  262001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.key ...
	I1025 09:46:54.793835  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.key: {Name:mk4b9a90f48e0226c6f68f80a7710e3117e55c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
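The crypto.go steps above generate two self-signed CAs (minikubeCA, proxyClientCA) and then profile certs signed by them. A minimal sketch of the CA half using Go's crypto/x509, with illustrative field choices (this is not minikube's certs.go; errors are elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true, // CA: may sign the profile certs generated later
	}
	// Self-signed: template and parent are the same certificate.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	crt, _ := os.Create("ca.crt")
	pem.Encode(crt, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyOut, _ := os.Create("ca.key")
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}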
	I1025 09:46:54.794048  262001 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:46:54.794097  262001 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:46:54.794126  262001 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:46:54.794157  262001 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 09:46:54.794726  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:46:54.812457  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 09:46:54.831363  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:46:54.849780  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:46:54.866631  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:46:54.883537  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:46:54.900536  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:46:54.916750  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:46:54.934038  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:46:54.951449  262001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:46:54.964598  262001 ssh_runner.go:195] Run: openssl version
	I1025 09:46:54.970745  262001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:46:54.979436  262001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:54.983084  262001 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:54.983182  262001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:55.024845  262001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
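The b5213941.0 name above is OpenSSL's subject-hash lookup convention: libraries resolve trust by hashing a cert's subject and looking for <hash>.0 in /etc/ssl/certs, which is why the hash was computed with `openssl x509 -hash -noout` two steps earlier. An illustrative Go sketch of that pair of steps:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// ln -fs equivalent: remove any stale link, then create the new one.
	os.Remove(link)
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}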
	I1025 09:46:55.035295  262001 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:46:55.039499  262001 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:46:55.039550  262001 kubeadm.go:400] StartCluster: {Name:addons-184548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-184548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:46:55.039627  262001 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:46:55.039693  262001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:46:55.072139  262001 cri.go:89] found id: ""
	I1025 09:46:55.072287  262001 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:46:55.080462  262001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:46:55.088777  262001 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:46:55.088863  262001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:46:55.096964  262001 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:46:55.097033  262001 kubeadm.go:157] found existing configuration files:
	
	I1025 09:46:55.097138  262001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:46:55.105093  262001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:46:55.105186  262001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:46:55.113349  262001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:46:55.123394  262001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:46:55.123479  262001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:46:55.131062  262001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:46:55.139008  262001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:46:55.139117  262001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:46:55.146910  262001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:46:55.155408  262001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:46:55.155511  262001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:46:55.163613  262001 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:46:55.228034  262001 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 09:46:55.228279  262001 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 09:46:55.297471  262001 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:47:12.323653  262001 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:47:12.323718  262001 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:47:12.323809  262001 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:47:12.323877  262001 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 09:47:12.323918  262001 kubeadm.go:318] OS: Linux
	I1025 09:47:12.323965  262001 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:47:12.324016  262001 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 09:47:12.324064  262001 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:47:12.324114  262001 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:47:12.324165  262001 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:47:12.324215  262001 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:47:12.324262  262001 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:47:12.324312  262001 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:47:12.324360  262001 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 09:47:12.324434  262001 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:47:12.324532  262001 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:47:12.324624  262001 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:47:12.324689  262001 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:47:12.327668  262001 out.go:252]   - Generating certificates and keys ...
	I1025 09:47:12.327772  262001 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:47:12.327894  262001 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:47:12.327982  262001 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:47:12.328047  262001 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:47:12.328117  262001 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:47:12.328171  262001 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:47:12.328229  262001 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:47:12.328353  262001 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-184548 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:47:12.328417  262001 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:47:12.328552  262001 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-184548 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:47:12.328627  262001 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:47:12.328701  262001 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:47:12.328755  262001 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:47:12.328832  262001 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:47:12.328962  262001 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:47:12.329042  262001 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:47:12.329107  262001 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:47:12.329175  262001 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:47:12.329236  262001 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:47:12.329345  262001 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:47:12.329422  262001 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:47:12.332579  262001 out.go:252]   - Booting up control plane ...
	I1025 09:47:12.332716  262001 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:47:12.332833  262001 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:47:12.332940  262001 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:47:12.333075  262001 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:47:12.333179  262001 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:47:12.333346  262001 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:47:12.333468  262001 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:47:12.333522  262001 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:47:12.333710  262001 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:47:12.333837  262001 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:47:12.333904  262001 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.501068458s
	I1025 09:47:12.334032  262001 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:47:12.334135  262001 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1025 09:47:12.334246  262001 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:47:12.334333  262001 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:47:12.334426  262001 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.518311674s
	I1025 09:47:12.334517  262001 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.42953607s
	I1025 09:47:12.334610  262001 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001373423s
	I1025 09:47:12.334755  262001 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:47:12.334921  262001 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:47:12.335008  262001 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:47:12.335247  262001 kubeadm.go:318] [mark-control-plane] Marking the node addons-184548 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:47:12.335330  262001 kubeadm.go:318] [bootstrap-token] Using token: 7qak07.9wrl07i3bus0m2or
	I1025 09:47:12.340215  262001 out.go:252]   - Configuring RBAC rules ...
	I1025 09:47:12.340345  262001 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:47:12.340440  262001 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:47:12.340600  262001 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:47:12.340739  262001 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:47:12.340865  262001 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:47:12.340958  262001 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:47:12.341080  262001 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:47:12.341129  262001 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:47:12.341181  262001 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:47:12.341190  262001 kubeadm.go:318] 
	I1025 09:47:12.341252  262001 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:47:12.341259  262001 kubeadm.go:318] 
	I1025 09:47:12.341370  262001 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:47:12.341428  262001 kubeadm.go:318] 
	I1025 09:47:12.341461  262001 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:47:12.341529  262001 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:47:12.341588  262001 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:47:12.341598  262001 kubeadm.go:318] 
	I1025 09:47:12.341655  262001 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:47:12.341663  262001 kubeadm.go:318] 
	I1025 09:47:12.341713  262001 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:47:12.341721  262001 kubeadm.go:318] 
	I1025 09:47:12.341776  262001 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:47:12.341859  262001 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:47:12.341936  262001 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:47:12.341945  262001 kubeadm.go:318] 
	I1025 09:47:12.342054  262001 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:47:12.342140  262001 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:47:12.342150  262001 kubeadm.go:318] 
	I1025 09:47:12.342238  262001 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7qak07.9wrl07i3bus0m2or \
	I1025 09:47:12.342350  262001 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 \
	I1025 09:47:12.342375  262001 kubeadm.go:318] 	--control-plane 
	I1025 09:47:12.342383  262001 kubeadm.go:318] 
	I1025 09:47:12.342472  262001 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:47:12.342480  262001 kubeadm.go:318] 
	I1025 09:47:12.342566  262001 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7qak07.9wrl07i3bus0m2or \
	I1025 09:47:12.342690  262001 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 
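The --discovery-token-ca-cert-hash in the join commands above is kubeadm's public-key pin: a sha256 over the CA certificate's Subject Public Key Info, printed as sha256:<hex>. A short sketch recomputing it from ca.crt (path taken from the logs above):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(data)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Hash the SPKI, not the whole certificate; this should match the
	// sha256:537c86... value in the kubeadm join lines above.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}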
	I1025 09:47:12.342704  262001 cni.go:84] Creating CNI manager for ""
	I1025 09:47:12.342712  262001 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:47:12.345926  262001 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:47:12.349026  262001 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:47:12.354280  262001 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:47:12.354304  262001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:47:12.369223  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:47:12.677027  262001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:47:12.677239  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:12.677293  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-184548 minikube.k8s.io/updated_at=2025_10_25T09_47_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=addons-184548 minikube.k8s.io/primary=true
	I1025 09:47:12.852071  262001 ops.go:34] apiserver oom_adj: -16
	I1025 09:47:12.852213  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:13.352991  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:13.852621  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:14.353267  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:14.852347  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:15.353210  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:15.852331  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:16.352346  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:16.460189  262001 kubeadm.go:1113] duration metric: took 3.783065612s to wait for elevateKubeSystemPrivileges
	I1025 09:47:16.460214  262001 kubeadm.go:402] duration metric: took 21.420667233s to StartCluster
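The run of identical `kubectl get sa default` commands above is a poll loop: kubeadm has returned, but the default service account only exists once the controller manager catches up, and the minikube-rbac clusterrolebinding needs it. A minimal sketch of that wait, shelling out the same way the log does (illustrative, not minikube's elevateKubeSystemPrivileges):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
		if cmd.Run() == nil {
			return nil // service account exists; RBAC setup can proceed
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}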
	I1025 09:47:16.460232  262001 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:47:16.460339  262001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 09:47:16.460740  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:47:16.460921  262001 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:47:16.461110  262001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:47:16.461380  262001 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:47:16.461439  262001 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 09:47:16.461541  262001 addons.go:69] Setting yakd=true in profile "addons-184548"
	I1025 09:47:16.461560  262001 addons.go:238] Setting addon yakd=true in "addons-184548"
	I1025 09:47:16.461583  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.462105  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.462655  262001 addons.go:69] Setting metrics-server=true in profile "addons-184548"
	I1025 09:47:16.462789  262001 addons.go:238] Setting addon metrics-server=true in "addons-184548"
	I1025 09:47:16.462822  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.462830  262001 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-184548"
	I1025 09:47:16.462848  262001 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-184548"
	I1025 09:47:16.462873  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.463250  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.463324  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.466101  262001 addons.go:69] Setting registry=true in profile "addons-184548"
	I1025 09:47:16.466133  262001 addons.go:238] Setting addon registry=true in "addons-184548"
	I1025 09:47:16.466181  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.466705  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.467373  262001 addons.go:69] Setting registry-creds=true in profile "addons-184548"
	I1025 09:47:16.493032  262001 addons.go:238] Setting addon registry-creds=true in "addons-184548"
	I1025 09:47:16.493081  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.493560  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.467388  262001 addons.go:69] Setting storage-provisioner=true in profile "addons-184548"
	I1025 09:47:16.510175  262001 addons.go:238] Setting addon storage-provisioner=true in "addons-184548"
	I1025 09:47:16.510216  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.510685  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.467395  262001 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-184548"
	I1025 09:47:16.511979  262001 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-184548"
	I1025 09:47:16.512291  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.467401  262001 addons.go:69] Setting volcano=true in profile "addons-184548"
	I1025 09:47:16.529400  262001 addons.go:238] Setting addon volcano=true in "addons-184548"
	I1025 09:47:16.529447  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.530004  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.467427  262001 addons.go:69] Setting volumesnapshots=true in profile "addons-184548"
	I1025 09:47:16.539791  262001 addons.go:238] Setting addon volumesnapshots=true in "addons-184548"
	I1025 09:47:16.491572  262001 addons.go:69] Setting cloud-spanner=true in profile "addons-184548"
	I1025 09:47:16.539828  262001 addons.go:238] Setting addon cloud-spanner=true in "addons-184548"
	I1025 09:47:16.539856  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.491613  262001 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-184548"
	I1025 09:47:16.540010  262001 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-184548"
	I1025 09:47:16.540029  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.491625  262001 addons.go:69] Setting default-storageclass=true in profile "addons-184548"
	I1025 09:47:16.540119  262001 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-184548"
	I1025 09:47:16.491632  262001 addons.go:69] Setting gcp-auth=true in profile "addons-184548"
	I1025 09:47:16.540197  262001 mustload.go:65] Loading cluster: addons-184548
	I1025 09:47:16.491638  262001 addons.go:69] Setting ingress=true in profile "addons-184548"
	I1025 09:47:16.540287  262001 addons.go:238] Setting addon ingress=true in "addons-184548"
	I1025 09:47:16.540311  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.491644  262001 addons.go:69] Setting ingress-dns=true in profile "addons-184548"
	I1025 09:47:16.540391  262001 addons.go:238] Setting addon ingress-dns=true in "addons-184548"
	I1025 09:47:16.540408  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.491666  262001 addons.go:69] Setting inspektor-gadget=true in profile "addons-184548"
	I1025 09:47:16.540495  262001 addons.go:238] Setting addon inspektor-gadget=true in "addons-184548"
	I1025 09:47:16.540508  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.491811  262001 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-184548"
	I1025 09:47:16.540581  262001 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-184548"
	I1025 09:47:16.540594  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.492462  262001 out.go:179] * Verifying Kubernetes components...
	I1025 09:47:16.551541  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.552929  262001 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:47:16.553242  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.553842  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.566045  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.566498  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.575435  262001 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1025 09:47:16.576011  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.578392  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.593861  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.598767  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.614243  262001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:47:16.617885  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.617909  262001 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:47:16.617925  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 09:47:16.618014  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.632821  262001 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 09:47:16.647650  262001 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 09:47:16.649900  262001 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 09:47:16.652737  262001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:47:16.652992  262001 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 09:47:16.653269  262001 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 09:47:16.653419  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.664729  262001 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 09:47:16.664761  262001 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 09:47:16.664861  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.697731  262001 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 09:47:16.698246  262001 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 09:47:16.737645  262001 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-184548"
	I1025 09:47:16.737803  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.748102  262001 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:47:16.748167  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 09:47:16.748289  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.773349  262001 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 09:47:16.773423  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 09:47:16.773504  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.797163  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W1025 09:47:16.798326  262001 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 09:47:16.800430  262001 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 09:47:16.800492  262001 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 09:47:16.800583  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.806057  262001 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:47:16.809218  262001 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:47:16.809282  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:47:16.809379  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.839620  262001 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 09:47:16.841829  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.842781  262001 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:47:16.842829  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 09:47:16.842902  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.866007  262001 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 09:47:16.868492  262001 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 09:47:16.868552  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 09:47:16.868638  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.885765  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.886082  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 09:47:16.893664  262001 addons.go:238] Setting addon default-storageclass=true in "addons-184548"
	I1025 09:47:16.893710  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.897861  262001 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:47:16.898244  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.898654  262001 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 09:47:16.901964  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:16.903269  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 09:47:16.906141  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 09:47:16.906196  262001 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 09:47:16.906206  262001 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 09:47:16.906268  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.903359  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:16.925663  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 09:47:16.928615  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 09:47:16.935925  262001 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 09:47:16.938881  262001 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:47:16.939980  262001 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 09:47:16.942448  262001 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:47:16.942472  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 09:47:16.942538  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.950095  262001 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:47:16.950126  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 09:47:16.950194  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.958323  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:16.982453  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 09:47:16.990243  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 09:47:16.997725  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 09:47:17.002740  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 09:47:17.002830  262001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 09:47:17.002955  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:17.012107  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.022531  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.048265  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.055842  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.057375  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.059630  262001 out.go:179]   - Using image docker.io/busybox:stable
	I1025 09:47:17.078142  262001 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 09:47:17.088065  262001 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:47:17.088111  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 09:47:17.088186  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:17.112754  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.126071  262001 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:47:17.126100  262001 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:47:17.126173  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:17.126538  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.135428  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.151347  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.154127  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.169566  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	W1025 09:47:17.171823  262001 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:47:17.171854  262001 retry.go:31] will retry after 132.984624ms: ssh: handshake failed: EOF
	W1025 09:47:17.172373  262001 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:47:17.172397  262001 retry.go:31] will retry after 304.151219ms: ssh: handshake failed: EOF
	W1025 09:47:17.172801  262001 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:47:17.172815  262001 retry.go:31] will retry after 244.858636ms: ssh: handshake failed: EOF
	W1025 09:47:17.173359  262001 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:47:17.173380  262001 retry.go:31] will retry after 323.234829ms: ssh: handshake failed: EOF
	I1025 09:47:17.182642  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.183318  262001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1025 09:47:17.477772  262001 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:47:17.477843  262001 retry.go:31] will retry after 541.807175ms: ssh: handshake failed: EOF
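The "dial failure (will retry)" lines above show transient SSH handshake EOFs, expected while many addon installers open connections to port 33133 concurrently; each failure is retried after a randomized delay. A generic sketch of that retry-with-jitter pattern (names illustrative, not sshutil's actual code):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Randomized, growing delay so concurrent dialers don't all
		// reconnect at the same instant.
		d := time.Duration(float64(base) * (1 + rand.Float64()) * float64(i+1))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	attempt := 0
	_ = retry(5, 100*time.Millisecond, func() error {
		attempt++
		if attempt < 3 {
			return fmt.Errorf("ssh: handshake failed: EOF")
		}
		return nil
	})
}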
	I1025 09:47:17.717968  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:47:17.743604  262001 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 09:47:17.743677  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 09:47:17.793746  262001 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 09:47:17.793820  262001 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 09:47:17.882972  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:47:17.945548  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:47:17.974595  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 09:47:17.985213  262001 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 09:47:17.985289  262001 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 09:47:18.013356  262001 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 09:47:18.013443  262001 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 09:47:18.027584  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:47:18.044156  262001 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 09:47:18.044231  262001 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 09:47:18.058274  262001 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 09:47:18.058357  262001 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 09:47:18.140299  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:47:18.177245  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:47:18.185312  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:47:18.203307  262001 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:18.203375  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 09:47:18.206618  262001 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:47:18.206636  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 09:47:18.208836  262001 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:47:18.208854  262001 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 09:47:18.208916  262001 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 09:47:18.208921  262001 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 09:47:18.211998  262001 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 09:47:18.212020  262001 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 09:47:18.375378  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:47:18.395653  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:47:18.410483  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:47:18.435959  262001 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:47:18.436030  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 09:47:18.439536  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:18.441865  262001 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 09:47:18.441943  262001 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
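The scp/apply pairs above are minikube's addon install pattern: each manifest is copied onto the node under /etc/kubernetes/addons, then applied with the node-local kubectl binary against the node's kubeconfig. A rough manual equivalent for a single manifest (a sketch; the profile name, paths, and kubectl version are taken from this run):

	# re-run one addon apply by hand over minikube's ssh tunnel
	minikube -p addons-184548 ssh -- \
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-svc.yaml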
	I1025 09:47:18.516724  262001 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.333356975s)
	I1025 09:47:18.516829  262001 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.863624201s)
	I1025 09:47:18.517042  262001 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
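The sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway address. Reconstructed from the sed expression itself, the injected Corefile stanza is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}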
	I1025 09:47:18.518170  262001 node_ready.go:35] waiting up to 6m0s for node "addons-184548" to be "Ready" ...
	I1025 09:47:18.657251  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:47:18.681672  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 09:47:18.681738  262001 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 09:47:18.884306  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 09:47:18.884371  262001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 09:47:18.929908  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 09:47:18.929976  262001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 09:47:19.024314  262001 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-184548" context rescaled to 1 replicas
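The "rescaled to 1 replicas" line trims coredns down to a single replica for this single-node cluster. kapi.go does this through the client-go scale API; the kubectl equivalent would be roughly:

	kubectl -n kube-system scale deployment coredns --replicas=1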
	I1025 09:47:19.060543  262001 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:47:19.060569  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 09:47:19.208768  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 09:47:19.208814  262001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 09:47:19.335754  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.617629589s)
	I1025 09:47:19.347339  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:47:19.463280  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 09:47:19.463308  262001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 09:47:19.692617  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 09:47:19.692643  262001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 09:47:19.935307  262001 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 09:47:19.935381  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 09:47:20.126371  262001 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 09:47:20.126399  262001 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 09:47:20.379339  262001 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 09:47:20.379368  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	W1025 09:47:20.551446  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
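node_ready.go polls the node's Ready condition until it flips to "True" (within the 6m0s budget noted earlier). An equivalent one-off check from outside, using the node name from this run:

	kubectl get node addons-184548 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'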
	I1025 09:47:20.578243  262001 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 09:47:20.578267  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 09:47:20.762115  262001 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:47:20.762198  262001 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 09:47:20.960208  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:47:21.943353  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.060301483s)
	I1025 09:47:21.943460  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.99783848s)
	I1025 09:47:21.943513  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.968833935s)
	I1025 09:47:21.943552  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.915895862s)
	I1025 09:47:21.943599  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.803234157s)
	I1025 09:47:21.943846  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.766522472s)
	I1025 09:47:22.911668  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.726273072s)
	I1025 09:47:22.911748  262001 addons.go:479] Verifying addon ingress=true in "addons-184548"
	I1025 09:47:22.912267  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.536806479s)
	I1025 09:47:22.912300  262001 addons.go:479] Verifying addon registry=true in "addons-184548"
	I1025 09:47:22.912327  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.472677073s)
	W1025 09:47:22.912368  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:22.912416  262001 retry.go:31] will retry after 185.956785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
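The validation error above means at least one document inside ig-crd.yaml is missing the two mandatory top-level fields every Kubernetes manifest must carry. For reference, only those header fields are sketched below; the metadata.name is a hypothetical placeholder, not taken from this run:

	apiVersion: apiextensions.k8s.io/v1   # required, absent in the failing document
	kind: CustomResourceDefinition        # required, absent in the failing document
	metadata:
	  name: examples.example.io           # hypothetical placeholder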
	I1025 09:47:22.912634  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.516905503s)
	I1025 09:47:22.912655  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.56528628s)
	W1025 09:47:22.912682  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 09:47:22.912695  262001 retry.go:31] will retry after 295.663661ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
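This failure is an ordering problem rather than a bad manifest: the VolumeSnapshotClass object lands in the same apply batch that creates its CRD, before the API server has registered the new kind. Sequencing it by hand would look something like this sketch, using the CRD name from the stdout above:

	# wait until the CRD is registered, then apply the CR that depends on it
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml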
	I1025 09:47:22.912718  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.502156172s)
	I1025 09:47:22.912730  262001 addons.go:479] Verifying addon metrics-server=true in "addons-184548"
	I1025 09:47:22.912636  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.255292132s)
	I1025 09:47:22.915270  262001 out.go:179] * Verifying ingress addon...
	I1025 09:47:22.917230  262001 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-184548 service yakd-dashboard -n yakd-dashboard
	
	I1025 09:47:22.917356  262001 out.go:179] * Verifying registry addon...
	I1025 09:47:22.921583  262001 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 09:47:22.921628  262001 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 09:47:22.930269  262001 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:47:22.930295  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:22.933482  262001 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 09:47:22.933507  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
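The kapi.go lines poll pods by label selector until they leave Pending. Roughly the same check from outside the test harness, with the namespaces and selectors shown in the log:

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=5m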
	W1025 09:47:23.023115  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:23.099041  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:23.208557  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:47:23.236433  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.27610319s)
	I1025 09:47:23.236471  262001 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-184548"
	I1025 09:47:23.239658  262001 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 09:47:23.243278  262001 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 09:47:23.253374  262001 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:47:23.253399  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:23.426668  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:23.426836  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:23.749193  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:23.927548  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:23.927894  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:24.143418  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.04433479s)
	W1025 09:47:24.143469  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:24.143502  262001 retry.go:31] will retry after 557.55518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:24.247123  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:24.426922  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:24.427262  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:24.590642  262001 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 09:47:24.590722  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:24.609896  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:24.702135  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:24.740787  262001 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 09:47:24.747781  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:24.760791  262001 addons.go:238] Setting addon gcp-auth=true in "addons-184548"
	I1025 09:47:24.760881  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:24.761386  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:24.781662  262001 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 09:47:24.781733  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:24.800648  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:24.926679  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:24.927094  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:25.247190  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:25.427023  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:25.427895  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:25.522331  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:25.748945  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:25.926656  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:25.926956  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:26.071703  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.863042275s)
	I1025 09:47:26.071831  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.369661277s)
	W1025 09:47:26.071916  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:26.071942  262001 retry.go:31] will retry after 768.223432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:26.071877  262001 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.290191446s)
	I1025 09:47:26.075150  262001 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 09:47:26.078123  262001 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:47:26.081070  262001 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 09:47:26.081110  262001 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 09:47:26.095644  262001 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 09:47:26.095667  262001 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 09:47:26.109929  262001 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:47:26.109956  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 09:47:26.124772  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:47:26.247602  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:26.426644  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:26.427014  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:26.592437  262001 addons.go:479] Verifying addon gcp-auth=true in "addons-184548"
	I1025 09:47:26.596639  262001 out.go:179] * Verifying gcp-auth addon...
	I1025 09:47:26.609423  262001 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 09:47:26.617406  262001 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 09:47:26.617433  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
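gcp-auth was switched on implicitly once application credentials were found on the host (the "Setting addon gcp-auth=true" line above); outside a test run the same addon is enabled explicitly with:

	minikube -p addons-184548 addons enable gcp-auth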
	I1025 09:47:26.751458  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:26.840903  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:26.926903  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:26.927065  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:27.113326  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:27.246530  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:27.426613  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:27.426950  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:27.613335  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:47:27.657352  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:27.657385  262001 retry.go:31] will retry after 1.097014533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:27.747904  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:27.925561  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:27.925889  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:28.022167  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:28.113162  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:28.247163  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:28.425557  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:28.425814  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:28.612883  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:28.746787  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:28.754859  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:28.926680  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:28.927683  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:29.113445  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:29.247666  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:29.426013  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:29.426783  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:29.571898  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:29.571939  262001 retry.go:31] will retry after 1.599607704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:29.612735  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:29.746870  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:29.924926  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:29.925075  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:30.022958  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:30.114230  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:30.247670  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:30.425147  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:30.425391  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:30.613350  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:30.746452  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:30.924475  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:30.924624  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:31.113763  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:31.171794  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:31.247499  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:31.425208  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:31.425435  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:31.613335  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:31.750155  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:31.926723  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:31.926868  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 09:47:32.019166  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:32.019195  262001 retry.go:31] will retry after 1.136319033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:32.113602  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:32.247283  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:32.425481  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:32.426844  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:32.521782  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:32.612330  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:32.746292  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:32.925023  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:32.925215  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:33.113070  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:33.156135  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:33.246444  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:33.426482  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:33.426577  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:33.613116  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:33.749642  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:33.926927  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:33.927432  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:33.973511  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:33.973599  262001 retry.go:31] will retry after 3.205495295s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:34.113200  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:34.247203  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:34.425961  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:34.426203  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:34.613464  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:34.746654  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:34.925260  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:34.925267  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:35.022404  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:35.113009  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:35.246811  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:35.425359  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:35.425771  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:35.613546  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:35.746814  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:35.924998  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:35.925034  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:36.113710  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:36.246958  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:36.424898  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:36.425070  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:36.613195  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:36.745957  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:36.925748  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:36.925930  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:37.112905  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:37.180025  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:37.249843  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:37.425770  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:37.425972  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:37.530999  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:37.616991  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:37.750363  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:37.925209  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:37.927094  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:38.061449  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:38.061537  262001 retry.go:31] will retry after 4.791342408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
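The retry.go delays across these attempts (186ms, 296ms, 558ms, 768ms, 1.1s, 1.6s, 1.1s, 3.2s, 4.8s) grow roughly exponentially with jitter. A crude shell rendering of the loop, as a sketch only (minikube's actual implementation is the Go helper in retry.go):

	# keep re-applying with increasing pauses until kubectl exits 0
	for delay in 0.2 0.3 0.6 0.8 1.1 1.6 3.2 4.8; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/ig-crd.yaml \
	    -f /etc/kubernetes/addons/ig-deployment.yaml && break
	  sleep "$delay"
	done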
	I1025 09:47:38.113518  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:38.246874  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:38.425172  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:38.425308  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:38.612475  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:38.746297  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:38.925763  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:38.925931  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:39.112981  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:39.247021  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:39.425571  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:39.425633  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:39.613389  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:39.746561  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:39.924880  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:39.925092  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:40.023097  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:40.113055  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:40.246924  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:40.426151  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:40.426323  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:40.612848  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:40.746618  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:40.925662  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:40.925866  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:41.113702  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:41.247023  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:41.425926  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:41.426403  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:41.612936  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:41.747369  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:41.925264  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:41.925664  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:42.113800  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:42.247448  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:42.425817  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:42.426039  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:42.521916  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:42.613045  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:42.746941  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:42.853504  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:42.924651  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:42.926193  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:43.113359  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:43.246964  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:43.427041  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:43.427438  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:43.613243  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:47:43.665029  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:43.665063  262001 retry.go:31] will retry after 6.961055561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:43.747511  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:43.925729  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:43.925972  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:44.112939  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:44.246932  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:44.425090  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:44.425656  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:44.613032  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:44.746963  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:44.925007  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:44.925238  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 09:47:45.029607  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:45.116853  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:45.248044  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:45.425792  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:45.426053  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:45.612944  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:45.747155  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:45.925277  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:45.925449  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:46.112840  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:46.246587  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:46.424448  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:46.424922  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:46.612798  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:46.747022  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:46.925152  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:46.925417  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:47.113771  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:47.246941  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:47.425112  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:47.425288  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:47.521060  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:47.612906  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:47.746931  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:47.924957  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:47.925354  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:48.113497  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:48.246556  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:48.425876  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:48.425937  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:48.613364  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:48.746219  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:48.925693  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:48.926108  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:49.112645  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:49.252332  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:49.425693  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:49.426067  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:49.522031  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:49.612650  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:49.746644  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:49.924904  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:49.925160  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:50.112788  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:50.246724  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:50.425269  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:50.425535  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:50.613031  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:50.627205  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:50.746181  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:50.927573  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:50.928037  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:51.114723  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:51.250519  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:51.425241  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:51.425822  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:51.447333  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:51.447368  262001 retry.go:31] will retry after 13.336991842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:51.613697  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:51.746741  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:51.925374  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:51.925544  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:52.021678  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:52.113818  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:52.246878  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:52.425415  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:52.426382  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:52.613101  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:52.747315  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:52.924737  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:52.925012  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:53.114138  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:53.247302  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:53.425884  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:53.425927  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:53.612937  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:53.746827  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:53.925363  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:53.925443  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 09:47:54.021833  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:54.113063  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:54.247171  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:54.425471  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:54.425680  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:54.613178  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:54.746954  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:54.925366  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:54.926091  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:55.113330  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:55.246248  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:55.425319  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:55.426559  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:55.613435  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:55.746082  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:55.925328  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:55.925442  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:56.112852  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:56.246809  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:56.424968  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:56.425572  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:56.521509  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:56.613505  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:56.746729  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:56.924921  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:56.925004  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:57.112745  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:57.246920  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:57.425069  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:57.425165  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:57.612934  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:57.746734  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:57.925036  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:57.926023  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:58.113317  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:58.247172  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:58.425574  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:58.425685  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 09:47:58.521803  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:58.612783  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:58.794733  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:58.935859  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:58.936002  262001 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:47:58.936019  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:59.027423  262001 node_ready.go:49] node "addons-184548" is "Ready"
	I1025 09:47:59.027454  262001 node_ready.go:38] duration metric: took 40.509259018s for node "addons-184548" to be "Ready" ...
	I1025 09:47:59.027468  262001 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:47:59.027527  262001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:47:59.067839  262001 api_server.go:72] duration metric: took 42.60689062s to wait for apiserver process to appear ...
	I1025 09:47:59.067866  262001 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:47:59.067887  262001 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 09:47:59.077397  262001 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 09:47:59.108157  262001 api_server.go:141] control plane version: v1.34.1
	I1025 09:47:59.108191  262001 api_server.go:131] duration metric: took 40.317466ms to wait for apiserver health ...
	I1025 09:47:59.108202  262001 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:47:59.214814  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:59.215448  262001 system_pods.go:59] 19 kube-system pods found
	I1025 09:47:59.215499  262001 system_pods.go:61] "coredns-66bc5c9577-hq8d8" [5f9e2449-9a59-40bb-9e50-c090419fd504] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:47:59.215507  262001 system_pods.go:61] "csi-hostpath-attacher-0" [044ac472-ff62-437c-ba08-8aa8f5d30315] Pending
	I1025 09:47:59.215520  262001 system_pods.go:61] "csi-hostpath-resizer-0" [60ca0fd7-d65f-4c71-941e-82f8ec090187] Pending
	I1025 09:47:59.215529  262001 system_pods.go:61] "csi-hostpathplugin-4jzcx" [bf0f0faf-4e29-4ab0-ba77-2e407b834b53] Pending
	I1025 09:47:59.215533  262001 system_pods.go:61] "etcd-addons-184548" [32a1291f-4f04-44e5-847e-ecfe895fb1c3] Running
	I1025 09:47:59.215548  262001 system_pods.go:61] "kindnet-dn6n8" [3c3ad1a4-a426-4593-b520-3ddacbbcedbb] Running
	I1025 09:47:59.215553  262001 system_pods.go:61] "kube-apiserver-addons-184548" [bf7b7404-bdc0-4bde-bc5e-78b6c046427b] Running
	I1025 09:47:59.215557  262001 system_pods.go:61] "kube-controller-manager-addons-184548" [b201b806-15b2-4190-8310-a15384162b02] Running
	I1025 09:47:59.215569  262001 system_pods.go:61] "kube-ingress-dns-minikube" [065b8ebc-6b4a-4aef-81e0-455096dd9765] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:47:59.215575  262001 system_pods.go:61] "kube-proxy-clv7b" [98d8d78b-4b77-49e2-b0bb-0ffc37ef6b7d] Running
	I1025 09:47:59.215587  262001 system_pods.go:61] "kube-scheduler-addons-184548" [384e707f-4d85-4826-965d-60352bad843a] Running
	I1025 09:47:59.215596  262001 system_pods.go:61] "metrics-server-85b7d694d7-5mbb4" [473fd1dc-bcd8-4299-ae17-9e3bca061756] Pending
	I1025 09:47:59.215603  262001 system_pods.go:61] "nvidia-device-plugin-daemonset-7sktv" [d6a26aea-18d0-46d2-a809-bf7ec95759f6] Pending
	I1025 09:47:59.215617  262001 system_pods.go:61] "registry-6b586f9694-cft48" [3b3f9d6f-cbd8-4b92-987f-b61c282e6860] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:47:59.215628  262001 system_pods.go:61] "registry-creds-764b6fb674-dk8fg" [5d6b936c-c964-41cd-a147-a05337379ebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:47:59.215646  262001 system_pods.go:61] "registry-proxy-l4vs6" [2d113e11-a239-4418-8da7-40a53e33fd75] Pending
	I1025 09:47:59.215651  262001 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2bqhf" [d754d289-7438-4581-aec3-5f6119282c1f] Pending
	I1025 09:47:59.215666  262001 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rlnlm" [a9f57908-6cfb-4029-b95f-56dcc3188ca2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:47:59.215679  262001 system_pods.go:61] "storage-provisioner" [6cfd6874-b84e-4d5a-8a54-04f7e12dbfcb] Pending
	I1025 09:47:59.215689  262001 system_pods.go:74] duration metric: took 107.477871ms to wait for pod list to return data ...
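The "Pending / Ready:ContainersNotReady (...)" summaries above are minikube's one-line rendering of each pod's status conditions. As a hedged illustration (the condition structure is standard Kubernetes pod status; the exact objects were not captured in this run), coredns-66bc5c9577-hq8d8 would be reporting conditions shaped like:

	# Illustrative pod status fragment matching the logged summary line;
	# values assumed, not taken from this run.
	status:
	  phase: Pending
	  conditions:
	    - type: Ready
	      status: "False"
	      reason: ContainersNotReady
	      message: 'containers with unready status: [coredns]'
	    - type: ContainersReady
	      status: "False"
	      reason: ContainersNotReady
	      message: 'containers with unready status: [coredns]'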
	I1025 09:47:59.215697  262001 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:47:59.233450  262001 default_sa.go:45] found service account: "default"
	I1025 09:47:59.233480  262001 default_sa.go:55] duration metric: took 17.776204ms for default service account to be created ...
	I1025 09:47:59.233498  262001 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:47:59.251651  262001 system_pods.go:86] 19 kube-system pods found
	I1025 09:47:59.251689  262001 system_pods.go:89] "coredns-66bc5c9577-hq8d8" [5f9e2449-9a59-40bb-9e50-c090419fd504] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:47:59.251699  262001 system_pods.go:89] "csi-hostpath-attacher-0" [044ac472-ff62-437c-ba08-8aa8f5d30315] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:47:59.251705  262001 system_pods.go:89] "csi-hostpath-resizer-0" [60ca0fd7-d65f-4c71-941e-82f8ec090187] Pending
	I1025 09:47:59.251711  262001 system_pods.go:89] "csi-hostpathplugin-4jzcx" [bf0f0faf-4e29-4ab0-ba77-2e407b834b53] Pending
	I1025 09:47:59.251715  262001 system_pods.go:89] "etcd-addons-184548" [32a1291f-4f04-44e5-847e-ecfe895fb1c3] Running
	I1025 09:47:59.251719  262001 system_pods.go:89] "kindnet-dn6n8" [3c3ad1a4-a426-4593-b520-3ddacbbcedbb] Running
	I1025 09:47:59.251723  262001 system_pods.go:89] "kube-apiserver-addons-184548" [bf7b7404-bdc0-4bde-bc5e-78b6c046427b] Running
	I1025 09:47:59.251729  262001 system_pods.go:89] "kube-controller-manager-addons-184548" [b201b806-15b2-4190-8310-a15384162b02] Running
	I1025 09:47:59.251740  262001 system_pods.go:89] "kube-ingress-dns-minikube" [065b8ebc-6b4a-4aef-81e0-455096dd9765] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:47:59.251744  262001 system_pods.go:89] "kube-proxy-clv7b" [98d8d78b-4b77-49e2-b0bb-0ffc37ef6b7d] Running
	I1025 09:47:59.251753  262001 system_pods.go:89] "kube-scheduler-addons-184548" [384e707f-4d85-4826-965d-60352bad843a] Running
	I1025 09:47:59.251757  262001 system_pods.go:89] "metrics-server-85b7d694d7-5mbb4" [473fd1dc-bcd8-4299-ae17-9e3bca061756] Pending
	I1025 09:47:59.251761  262001 system_pods.go:89] "nvidia-device-plugin-daemonset-7sktv" [d6a26aea-18d0-46d2-a809-bf7ec95759f6] Pending
	I1025 09:47:59.251775  262001 system_pods.go:89] "registry-6b586f9694-cft48" [3b3f9d6f-cbd8-4b92-987f-b61c282e6860] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:47:59.251781  262001 system_pods.go:89] "registry-creds-764b6fb674-dk8fg" [5d6b936c-c964-41cd-a147-a05337379ebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:47:59.251785  262001 system_pods.go:89] "registry-proxy-l4vs6" [2d113e11-a239-4418-8da7-40a53e33fd75] Pending
	I1025 09:47:59.251791  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2bqhf" [d754d289-7438-4581-aec3-5f6119282c1f] Pending
	I1025 09:47:59.251796  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rlnlm" [a9f57908-6cfb-4029-b95f-56dcc3188ca2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:47:59.251800  262001 system_pods.go:89] "storage-provisioner" [6cfd6874-b84e-4d5a-8a54-04f7e12dbfcb] Pending
	I1025 09:47:59.251814  262001 retry.go:31] will retry after 267.072515ms: missing components: kube-dns
	I1025 09:47:59.261286  262001 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:47:59.261319  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:59.438120  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:59.448434  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:59.567641  262001 system_pods.go:86] 19 kube-system pods found
	I1025 09:47:59.567684  262001 system_pods.go:89] "coredns-66bc5c9577-hq8d8" [5f9e2449-9a59-40bb-9e50-c090419fd504] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:47:59.567699  262001 system_pods.go:89] "csi-hostpath-attacher-0" [044ac472-ff62-437c-ba08-8aa8f5d30315] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:47:59.567713  262001 system_pods.go:89] "csi-hostpath-resizer-0" [60ca0fd7-d65f-4c71-941e-82f8ec090187] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:47:59.567719  262001 system_pods.go:89] "csi-hostpathplugin-4jzcx" [bf0f0faf-4e29-4ab0-ba77-2e407b834b53] Pending
	I1025 09:47:59.567730  262001 system_pods.go:89] "etcd-addons-184548" [32a1291f-4f04-44e5-847e-ecfe895fb1c3] Running
	I1025 09:47:59.567736  262001 system_pods.go:89] "kindnet-dn6n8" [3c3ad1a4-a426-4593-b520-3ddacbbcedbb] Running
	I1025 09:47:59.567748  262001 system_pods.go:89] "kube-apiserver-addons-184548" [bf7b7404-bdc0-4bde-bc5e-78b6c046427b] Running
	I1025 09:47:59.567757  262001 system_pods.go:89] "kube-controller-manager-addons-184548" [b201b806-15b2-4190-8310-a15384162b02] Running
	I1025 09:47:59.567771  262001 system_pods.go:89] "kube-ingress-dns-minikube" [065b8ebc-6b4a-4aef-81e0-455096dd9765] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:47:59.567780  262001 system_pods.go:89] "kube-proxy-clv7b" [98d8d78b-4b77-49e2-b0bb-0ffc37ef6b7d] Running
	I1025 09:47:59.567789  262001 system_pods.go:89] "kube-scheduler-addons-184548" [384e707f-4d85-4826-965d-60352bad843a] Running
	I1025 09:47:59.567800  262001 system_pods.go:89] "metrics-server-85b7d694d7-5mbb4" [473fd1dc-bcd8-4299-ae17-9e3bca061756] Pending
	I1025 09:47:59.567823  262001 system_pods.go:89] "nvidia-device-plugin-daemonset-7sktv" [d6a26aea-18d0-46d2-a809-bf7ec95759f6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:47:59.567838  262001 system_pods.go:89] "registry-6b586f9694-cft48" [3b3f9d6f-cbd8-4b92-987f-b61c282e6860] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:47:59.567845  262001 system_pods.go:89] "registry-creds-764b6fb674-dk8fg" [5d6b936c-c964-41cd-a147-a05337379ebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:47:59.567857  262001 system_pods.go:89] "registry-proxy-l4vs6" [2d113e11-a239-4418-8da7-40a53e33fd75] Pending
	I1025 09:47:59.567864  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2bqhf" [d754d289-7438-4581-aec3-5f6119282c1f] Pending
	I1025 09:47:59.567872  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rlnlm" [a9f57908-6cfb-4029-b95f-56dcc3188ca2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:47:59.567882  262001 system_pods.go:89] "storage-provisioner" [6cfd6874-b84e-4d5a-8a54-04f7e12dbfcb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:47:59.567901  262001 retry.go:31] will retry after 243.206318ms: missing components: kube-dns
	I1025 09:47:59.659533  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:59.758096  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:59.827085  262001 system_pods.go:86] 19 kube-system pods found
	I1025 09:47:59.827126  262001 system_pods.go:89] "coredns-66bc5c9577-hq8d8" [5f9e2449-9a59-40bb-9e50-c090419fd504] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:47:59.827145  262001 system_pods.go:89] "csi-hostpath-attacher-0" [044ac472-ff62-437c-ba08-8aa8f5d30315] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:47:59.827159  262001 system_pods.go:89] "csi-hostpath-resizer-0" [60ca0fd7-d65f-4c71-941e-82f8ec090187] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:47:59.827169  262001 system_pods.go:89] "csi-hostpathplugin-4jzcx" [bf0f0faf-4e29-4ab0-ba77-2e407b834b53] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:47:59.827188  262001 system_pods.go:89] "etcd-addons-184548" [32a1291f-4f04-44e5-847e-ecfe895fb1c3] Running
	I1025 09:47:59.827199  262001 system_pods.go:89] "kindnet-dn6n8" [3c3ad1a4-a426-4593-b520-3ddacbbcedbb] Running
	I1025 09:47:59.827204  262001 system_pods.go:89] "kube-apiserver-addons-184548" [bf7b7404-bdc0-4bde-bc5e-78b6c046427b] Running
	I1025 09:47:59.827209  262001 system_pods.go:89] "kube-controller-manager-addons-184548" [b201b806-15b2-4190-8310-a15384162b02] Running
	I1025 09:47:59.827232  262001 system_pods.go:89] "kube-ingress-dns-minikube" [065b8ebc-6b4a-4aef-81e0-455096dd9765] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:47:59.827237  262001 system_pods.go:89] "kube-proxy-clv7b" [98d8d78b-4b77-49e2-b0bb-0ffc37ef6b7d] Running
	I1025 09:47:59.827245  262001 system_pods.go:89] "kube-scheduler-addons-184548" [384e707f-4d85-4826-965d-60352bad843a] Running
	I1025 09:47:59.827258  262001 system_pods.go:89] "metrics-server-85b7d694d7-5mbb4" [473fd1dc-bcd8-4299-ae17-9e3bca061756] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:47:59.827268  262001 system_pods.go:89] "nvidia-device-plugin-daemonset-7sktv" [d6a26aea-18d0-46d2-a809-bf7ec95759f6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:47:59.827279  262001 system_pods.go:89] "registry-6b586f9694-cft48" [3b3f9d6f-cbd8-4b92-987f-b61c282e6860] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:47:59.827288  262001 system_pods.go:89] "registry-creds-764b6fb674-dk8fg" [5d6b936c-c964-41cd-a147-a05337379ebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:47:59.827314  262001 system_pods.go:89] "registry-proxy-l4vs6" [2d113e11-a239-4418-8da7-40a53e33fd75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:47:59.827321  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2bqhf" [d754d289-7438-4581-aec3-5f6119282c1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:47:59.827335  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rlnlm" [a9f57908-6cfb-4029-b95f-56dcc3188ca2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:47:59.827341  262001 system_pods.go:89] "storage-provisioner" [6cfd6874-b84e-4d5a-8a54-04f7e12dbfcb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:47:59.827363  262001 retry.go:31] will retry after 389.232968ms: missing components: kube-dns
	I1025 09:47:59.931395  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:59.931568  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:00.117815  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:00.240063  262001 system_pods.go:86] 19 kube-system pods found
	I1025 09:48:00.241234  262001 system_pods.go:89] "coredns-66bc5c9577-hq8d8" [5f9e2449-9a59-40bb-9e50-c090419fd504] Running
	I1025 09:48:00.241325  262001 system_pods.go:89] "csi-hostpath-attacher-0" [044ac472-ff62-437c-ba08-8aa8f5d30315] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:48:00.241356  262001 system_pods.go:89] "csi-hostpath-resizer-0" [60ca0fd7-d65f-4c71-941e-82f8ec090187] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:48:00.241400  262001 system_pods.go:89] "csi-hostpathplugin-4jzcx" [bf0f0faf-4e29-4ab0-ba77-2e407b834b53] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:48:00.241429  262001 system_pods.go:89] "etcd-addons-184548" [32a1291f-4f04-44e5-847e-ecfe895fb1c3] Running
	I1025 09:48:00.241456  262001 system_pods.go:89] "kindnet-dn6n8" [3c3ad1a4-a426-4593-b520-3ddacbbcedbb] Running
	I1025 09:48:00.241486  262001 system_pods.go:89] "kube-apiserver-addons-184548" [bf7b7404-bdc0-4bde-bc5e-78b6c046427b] Running
	I1025 09:48:00.241518  262001 system_pods.go:89] "kube-controller-manager-addons-184548" [b201b806-15b2-4190-8310-a15384162b02] Running
	I1025 09:48:00.241553  262001 system_pods.go:89] "kube-ingress-dns-minikube" [065b8ebc-6b4a-4aef-81e0-455096dd9765] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:48:00.241578  262001 system_pods.go:89] "kube-proxy-clv7b" [98d8d78b-4b77-49e2-b0bb-0ffc37ef6b7d] Running
	I1025 09:48:00.241606  262001 system_pods.go:89] "kube-scheduler-addons-184548" [384e707f-4d85-4826-965d-60352bad843a] Running
	I1025 09:48:00.241636  262001 system_pods.go:89] "metrics-server-85b7d694d7-5mbb4" [473fd1dc-bcd8-4299-ae17-9e3bca061756] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:48:00.241667  262001 system_pods.go:89] "nvidia-device-plugin-daemonset-7sktv" [d6a26aea-18d0-46d2-a809-bf7ec95759f6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:48:00.241700  262001 system_pods.go:89] "registry-6b586f9694-cft48" [3b3f9d6f-cbd8-4b92-987f-b61c282e6860] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:48:00.241731  262001 system_pods.go:89] "registry-creds-764b6fb674-dk8fg" [5d6b936c-c964-41cd-a147-a05337379ebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:48:00.241763  262001 system_pods.go:89] "registry-proxy-l4vs6" [2d113e11-a239-4418-8da7-40a53e33fd75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:48:00.241800  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2bqhf" [d754d289-7438-4581-aec3-5f6119282c1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:48:00.241835  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rlnlm" [a9f57908-6cfb-4029-b95f-56dcc3188ca2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:48:00.241879  262001 system_pods.go:89] "storage-provisioner" [6cfd6874-b84e-4d5a-8a54-04f7e12dbfcb] Running
	I1025 09:48:00.241913  262001 system_pods.go:126] duration metric: took 1.008403877s to wait for k8s-apps to be running ...
	I1025 09:48:00.243419  262001 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:48:00.243585  262001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:48:00.315628  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:00.331162  262001 system_svc.go:56] duration metric: took 87.747408ms WaitForService to wait for kubelet
	I1025 09:48:00.331202  262001 kubeadm.go:586] duration metric: took 43.870258278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:48:00.331225  262001 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:48:00.348566  262001 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:48:00.348672  262001 node_conditions.go:123] node cpu capacity is 2
	I1025 09:48:00.348715  262001 node_conditions.go:105] duration metric: took 17.478775ms to run NodePressure ...
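The NodePressure verification reads capacity straight off the Node object; the two figures logged above map onto these Node status fields (sketch assembled from the logged values, all other fields omitted):

	# Node status fragment; values copied from the node_conditions lines above.
	status:
	  capacity:
	    cpu: "2"                        # "node cpu capacity is 2"
	    ephemeral-storage: 203034800Ki  # "node storage ephemeral capacity is 203034800Ki"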
	I1025 09:48:00.348768  262001 start.go:241] waiting for startup goroutines ...
	I1025 09:48:00.432415  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:00.432455  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:00.616168  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:00.747301  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:00.927256  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:00.927701  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:01.113079  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:01.247223  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:01.427807  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:01.427897  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:01.613213  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:01.747353  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:01.927923  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:01.928451  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:02.113788  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:02.247851  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:02.427645  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:02.428674  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:02.613224  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:02.746880  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:02.926695  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:02.927264  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:03.113556  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:03.247204  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:03.427516  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:03.428243  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:03.613049  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:03.747564  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:03.925930  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:03.926315  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:04.113181  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:04.246223  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:04.426127  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:04.426338  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:04.613186  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:04.747280  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:04.785417  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:48:04.925937  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:04.926332  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:05.113977  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:05.246550  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:05.425976  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:05.432470  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:48:05.608650  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:48:05.608685  262001 retry.go:31] will retry after 15.258673863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
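The failure above is kubectl's client-side validation: every document in a manifest must carry apiVersion and kind, and at least one document in ig-crd.yaml apparently lacks them, so the apply exits 1 even though the other gadget resources go through unchanged. A minimal pre-flight check for that condition (a hypothetical helper, not part of minikube; the file name and the single-document assumption are mine) could look like:

    // check_manifest.go -- hypothetical pre-flight helper, not part of
    // minikube. kubectl's client-side validation rejects any document
    // missing apiVersion or kind, which is exactly what the error above
    // reports for /etc/kubernetes/addons/ig-crd.yaml.
    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Assumes a single-document manifest; a multi-document file
        // would need a yaml.Decoder loop instead.
        var doc map[string]interface{}
        if err := yaml.Unmarshal(data, &doc); err != nil {
            fmt.Fprintln(os.Stderr, "not valid YAML:", err)
            os.Exit(1)
        }
        for _, field := range []string{"apiVersion", "kind"} {
            if _, ok := doc[field]; !ok {
                fmt.Printf("%s not set\n", field) // mirrors kubectl's message
            }
        }
    }
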
	I1025 09:48:05.612685  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:05.746807  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:05.925380  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:05.925558  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:06.114111  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:06.247210  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:06.425588  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:06.426257  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:06.613715  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:06.749134  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:06.926858  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:06.927043  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:07.113480  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:07.247683  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:07.425002  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:07.425470  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:07.612450  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:07.746677  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:07.925335  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:07.925742  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:08.112803  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:08.247152  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:08.425516  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:08.425816  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:08.612851  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:08.747337  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:08.925877  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:08.926142  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:09.113127  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:09.246162  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:09.425780  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:09.426250  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:09.613371  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:09.746375  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:09.925967  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:09.926238  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:10.112949  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:10.246984  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:10.424972  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:10.425461  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:10.613129  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:10.748377  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:10.926105  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:10.926773  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:11.114544  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:11.247311  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:11.425197  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:11.425433  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:11.613773  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:11.746852  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:11.926324  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:11.926473  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:12.113200  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:12.247516  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:12.425354  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:12.425776  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:12.612419  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:12.747644  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:12.926831  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:12.927268  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:13.115030  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:13.247826  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:13.427110  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:13.427579  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:13.613323  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:13.746804  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:13.926334  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:13.926486  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:14.113550  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:14.246531  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:14.425311  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:14.425768  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:14.612641  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:14.746828  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:14.925915  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:14.926071  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:15.113402  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:15.247034  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:15.426259  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:15.427828  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:15.613081  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:15.747037  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:15.926695  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:15.926825  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:16.112697  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:16.246729  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:16.425848  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:16.425944  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:16.612891  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:16.747141  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:16.926406  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:16.926683  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:17.113151  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:17.246058  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:17.426021  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:17.426805  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:17.613589  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:17.746912  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:17.926994  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:17.927772  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:18.113156  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:18.246894  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:18.426309  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:18.427019  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:18.613306  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:18.746336  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:18.926881  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:18.927120  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:19.113466  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:19.247457  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:19.425971  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:19.426278  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:19.626929  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:19.748228  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:19.926696  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:19.927029  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:20.113977  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:20.247921  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:20.426710  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:20.426935  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:20.612959  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:20.748664  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:20.868040  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:48:20.927075  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:20.927496  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:21.113672  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:21.247124  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:21.428720  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:21.429255  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:21.614155  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:21.746534  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:21.926415  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:21.926598  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:22.113379  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:22.247267  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:22.390227  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.522146782s)
	W1025 09:48:22.390264  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:48:22.390285  262001 retry.go:31] will retry after 18.197966339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
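Note the spacing of the retries: 15.258673863s after the first failure, then 18.197966339s after the second, i.e. a growing delay with jitter. A minimal sketch of that retry-with-backoff pattern (my reconstruction of what the retry.go lines suggest, not minikube's actual implementation; retryWithBackoff is a hypothetical name) looks like:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff runs op up to attempts times, sleeping a little
    // longer (plus random jitter) before each retry, and returns the
    // last error if every attempt fails.
    func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // grow the delay each attempt and add jitter, matching the
            // "will retry after 15.2s ... 18.1s" progression logged above
            sleep := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
        }
        return err
    }

    func main() {
        err := retryWithBackoff(3, 2*time.Second, func() error {
            return errors.New("apply failed") // stand-in for the kubectl apply above
        })
        fmt.Println("giving up:", err)
    }
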
	I1025 09:48:22.427397  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:22.427800  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:22.612931  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:22.747203  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:22.925977  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:22.926145  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:23.113347  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:23.247571  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:23.426417  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:23.426518  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:23.613926  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:23.747800  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:23.925156  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:23.926164  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:24.114276  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:24.246668  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:24.429757  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:24.430900  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:24.614226  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:24.746440  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:24.924953  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:24.925055  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:25.119282  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:25.246745  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:25.426040  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:25.427219  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:25.614114  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:25.749575  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:25.925042  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:25.925530  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:26.112512  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:26.258482  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:26.427018  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:26.427385  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:26.613865  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:26.748794  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:26.925554  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:26.925650  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:27.114806  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:27.247449  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:27.425177  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:27.425352  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:27.614014  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:27.747783  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:27.926043  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:27.926298  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:28.117363  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:28.251729  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:28.425746  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:28.426072  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:28.612740  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:28.747563  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:28.925520  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:28.925709  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:29.127027  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:29.251915  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:29.427423  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:29.427825  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:29.614037  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:29.746996  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:29.926847  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:29.926923  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:30.114675  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:30.257179  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:30.428692  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:30.429209  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:30.615623  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:30.751171  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:30.931224  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:30.931329  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:31.115528  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:31.255456  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:31.428765  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:31.428841  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:31.656366  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:31.760792  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:31.929168  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:31.929964  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:32.113021  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:32.252943  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:32.426950  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:32.427096  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:32.612966  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:32.747603  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:32.925304  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:32.925766  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:33.112687  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:33.247137  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:33.425744  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:33.425914  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:33.612784  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:33.748790  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:33.926844  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:33.927189  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:34.113381  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:34.247469  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:34.426622  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:34.427830  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:34.614464  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:34.747258  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:34.926642  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:34.927134  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:35.113409  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:35.246988  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:35.425269  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:35.425371  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:35.613363  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:35.748805  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:35.925817  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:35.926705  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:36.112328  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:36.246761  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:36.426384  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:36.426579  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:36.612669  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:36.746918  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:36.926242  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:36.926896  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:37.115570  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:37.247156  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:37.426014  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:37.426414  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:37.613232  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:37.747587  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:37.925351  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:37.925434  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:38.113263  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:38.247798  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:38.431149  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:38.431560  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:38.612914  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:38.747174  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:38.926288  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:38.926468  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:39.112553  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:39.247226  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:39.425826  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:39.426153  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:39.613349  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:39.746998  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:39.926170  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:39.927097  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:40.113834  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:40.247250  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:40.426501  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:40.427171  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:40.589430  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:48:40.613318  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:40.746894  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:40.926498  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:40.926746  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:41.116344  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:41.247181  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:41.428133  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:41.428503  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:41.612937  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:41.793819  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:41.928589  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:41.928924  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:42.115502  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:42.123450  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.533977391s)
	W1025 09:48:42.123521  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:48:42.123640  262001 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
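Interleaved with the apply retries, the kapi.go:96 lines poll each addon's pods by label selector roughly every 500ms until they leave Pending. A minimal client-go sketch of that wait loop (my reconstruction, not minikube's code; the kubeconfig path and selector are copied from the log above) could be:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        selector := "kubernetes.io/minikube-addons=registry" // one of the labels polled above
        for {
            pods, err := client.CoreV1().Pods("").List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                panic(err)
            }
            // keep waiting while no pod matches yet, or any match
            // is still short of Running
            ready := len(pods.Items) > 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    ready = false
                }
                fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
            }
            if ready {
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
        }
    }
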
	I1025 09:48:42.248796  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:42.424978  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:42.425133  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:42.613865  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:42.747748  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:42.927187  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:42.928789  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:43.124296  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:43.247091  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:43.427564  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:43.427965  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:43.613556  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:43.749010  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:43.927189  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:43.927599  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:44.113496  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:44.247981  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:44.426732  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:44.426902  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:44.613191  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:44.746916  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:44.929748  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:44.929903  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:45.114188  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:45.247219  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:45.430110  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:45.430378  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:45.614094  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:45.747426  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:45.928277  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:45.928810  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:46.112703  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:46.247092  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:46.426165  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:46.426942  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:46.613255  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:46.747191  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:46.926861  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:46.927113  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:47.113354  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:47.247020  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:47.426567  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:47.426830  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:47.613073  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:47.747229  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:47.926534  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:47.926796  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:48.113309  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:48.247243  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:48.426436  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:48.426807  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:48.612653  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:48.751278  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:48.925635  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:48.926500  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:49.113307  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:49.246747  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:49.426587  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:49.427041  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:49.613401  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:49.748410  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:49.928642  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:49.928793  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:50.112830  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:50.246928  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:50.427226  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:50.427667  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:50.612473  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:50.746920  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:50.925006  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:50.927354  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:51.113710  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:51.258274  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:51.426371  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:51.426457  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:51.613576  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:51.746860  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:51.927026  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:51.927208  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:52.113949  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:52.247611  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:52.427631  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:52.428154  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:52.612894  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:52.749184  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:52.925955  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:52.926530  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:53.112650  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:53.248150  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:53.430017  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:53.430227  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:53.613287  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:53.746615  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:53.925595  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:53.925695  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:54.113043  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:54.247029  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:54.426068  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:54.425886  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:54.612995  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:54.747940  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:54.925600  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:54.925769  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:55.112588  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:55.246879  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:55.427132  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:55.427962  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:55.613342  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:55.747211  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:55.926392  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:55.926992  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:56.113159  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:56.246590  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:56.426548  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:56.427849  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:56.614613  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:56.747484  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:56.926514  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:56.926969  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:57.113849  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:57.247309  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:57.427289  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:57.428066  262001 kapi.go:107] duration metric: took 1m34.506487077s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 09:48:57.613188  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:57.748422  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:57.926661  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:58.113170  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:58.246468  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:58.424965  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:58.613123  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:58.746078  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:58.925068  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:59.113069  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:59.247648  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:59.425765  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:59.612875  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:59.747357  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:59.929479  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:00.118257  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:00.248376  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:00.428716  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:00.613664  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:00.746676  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:00.927049  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:01.113713  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:01.249174  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:01.428675  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:01.613389  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:01.746951  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:01.929661  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:02.113382  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:02.248624  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:02.426195  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:02.614243  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:02.765489  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:02.939702  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:03.115551  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:03.250798  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:03.425879  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:03.616676  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:03.747755  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:03.925224  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:04.115876  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:04.247644  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:04.427501  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:04.612598  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:04.748165  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:04.926157  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:05.113142  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:05.247501  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:05.426896  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:05.612723  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:05.746776  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:05.925738  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:06.113665  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:06.247368  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:06.426060  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:06.613376  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:06.752837  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:06.925404  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:07.113978  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:07.247838  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:07.425221  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:07.613525  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:07.746789  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:07.925738  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:08.112981  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:08.248938  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:08.440245  262001 kapi.go:107] duration metric: took 1m45.51861226s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 09:49:08.613575  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:08.746862  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:09.113048  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:09.247243  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:09.613744  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:09.747322  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:10.114022  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:10.247472  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:10.613422  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:10.746872  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:11.113742  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:11.247687  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:11.612875  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:11.751830  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:12.115447  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:12.250182  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:12.613412  262001 kapi.go:107] duration metric: took 1m46.003988966s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 09:49:12.617620  262001 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-184548 cluster.
	I1025 09:49:12.621169  262001 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 09:49:12.624570  262001 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
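The opt-out mentioned above is a plain pod label. A minimal sketch of what that looks like in a manifest (pod name and image are hypothetical; the `gcp-auth-skip-secret` key is the one named in the message, and the `"true"` value follows the addon's documented convention):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                 # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"     # tells the gcp-auth webhook to skip mounting credentials
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]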
	I1025 09:49:12.746660  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:13.247126  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:13.746777  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:14.248335  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:14.747003  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:15.246801  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:15.746909  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:16.246829  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:16.747542  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:17.247921  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:17.747040  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:18.247565  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:18.747701  262001 kapi.go:107] duration metric: took 1m55.504419152s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 09:49:18.750741  262001 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, registry-creds, cloud-spanner, amd-gpu-device-plugin, ingress-dns, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1025 09:49:18.753713  262001 addons.go:514] duration metric: took 2m2.292256316s for enable addons: enabled=[nvidia-device-plugin storage-provisioner registry-creds cloud-spanner amd-gpu-device-plugin ingress-dns default-storageclass metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
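Each entry in this list maps to the standard addon toggles, and the --refresh re-mount behavior referenced a few lines up is likewise driven through `addons enable`. For example:

	minikube -p addons-184548 addons list
	minikube -p addons-184548 addons disable registry
	minikube -p addons-184548 addons enable gcp-auth --refresh   # re-mounts credentials into existing pods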
	I1025 09:49:18.753764  262001 start.go:246] waiting for cluster config update ...
	I1025 09:49:18.753787  262001 start.go:255] writing updated cluster config ...
	I1025 09:49:18.754139  262001 ssh_runner.go:195] Run: rm -f paused
	I1025 09:49:18.758757  262001 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:49:18.762396  262001 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hq8d8" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:18.769490  262001 pod_ready.go:94] pod "coredns-66bc5c9577-hq8d8" is "Ready"
	I1025 09:49:18.769518  262001 pod_ready.go:86] duration metric: took 7.095622ms for pod "coredns-66bc5c9577-hq8d8" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:18.773299  262001 pod_ready.go:83] waiting for pod "etcd-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:18.778439  262001 pod_ready.go:94] pod "etcd-addons-184548" is "Ready"
	I1025 09:49:18.778468  262001 pod_ready.go:86] duration metric: took 5.143177ms for pod "etcd-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:18.780883  262001 pod_ready.go:83] waiting for pod "kube-apiserver-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:18.786105  262001 pod_ready.go:94] pod "kube-apiserver-addons-184548" is "Ready"
	I1025 09:49:18.786139  262001 pod_ready.go:86] duration metric: took 5.22961ms for pod "kube-apiserver-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:18.788722  262001 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:19.163299  262001 pod_ready.go:94] pod "kube-controller-manager-addons-184548" is "Ready"
	I1025 09:49:19.163331  262001 pod_ready.go:86] duration metric: took 374.58361ms for pod "kube-controller-manager-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:19.363033  262001 pod_ready.go:83] waiting for pod "kube-proxy-clv7b" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:19.762585  262001 pod_ready.go:94] pod "kube-proxy-clv7b" is "Ready"
	I1025 09:49:19.762617  262001 pod_ready.go:86] duration metric: took 399.557695ms for pod "kube-proxy-clv7b" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:19.979389  262001 pod_ready.go:83] waiting for pod "kube-scheduler-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:20.363300  262001 pod_ready.go:94] pod "kube-scheduler-addons-184548" is "Ready"
	I1025 09:49:20.363329  262001 pod_ready.go:86] duration metric: took 383.902094ms for pod "kube-scheduler-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:20.363341  262001 pod_ready.go:40] duration metric: took 1.60454694s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:49:20.433751  262001 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:49:20.439743  262001 out.go:179] * Done! kubectl is now configured to use "addons-184548" cluster and "default" namespace by default
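The long runs of kapi.go:96 lines above are a fixed-interval poll over a label selector, repeated until every matching pod leaves Pending. A minimal, self-contained sketch of that pattern with client-go (an illustration only, not minikube's actual kapi.go; selector, namespace, and the ~500ms cadence mirror the log):

	// waitforlabel.go: a sketch of polling pods by label selector until all are Running.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel lists pods matching selector in ns every 500ms until all
	// of them report phase Running, or the timeout expires.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				return nil
			}
			fmt.Printf("waiting for pod %q\n", selector)
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for pods matching %q", selector)
	}

	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForLabel(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			panic(err)
		}
	}

Note the real code waits on the pod Ready condition rather than just phase Running; the loop structure and cadence are the point here.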
	
	
	==> CRI-O <==
	Oct 25 09:52:26 addons-184548 crio[826]: time="2025-10-25T09:52:26.252286128Z" level=info msg="Removed container c3e2f44a0c594538e283408f7b32dbb98c04a1ffa3e9eb8bc2568c576af1c191: kube-system/registry-creds-764b6fb674-dk8fg/registry-creds" id=66ac6272-cc58-4887-9405-9df56f4e82db name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.191684152Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-qh4wk/POD" id=6b6abb47-8820-447f-bf68-69196e55ed3a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.1917526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.204978366Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-qh4wk Namespace:default ID:37f03e364ba5a63b6226fd043902e833a08acd179773444c9b3490e2a1a2bc3f UID:3bcdcdee-2d81-41e4-8e41-26df18b7a9a7 NetNS:/var/run/netns/dc2d185d-28f2-4e27-8127-64f316a3189c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001cb55c0}] Aliases:map[]}"
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.205193178Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-qh4wk to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.226488448Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-qh4wk Namespace:default ID:37f03e364ba5a63b6226fd043902e833a08acd179773444c9b3490e2a1a2bc3f UID:3bcdcdee-2d81-41e4-8e41-26df18b7a9a7 NetNS:/var/run/netns/dc2d185d-28f2-4e27-8127-64f316a3189c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001cb55c0}] Aliases:map[]}"
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.226660175Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-qh4wk for CNI network kindnet (type=ptp)"
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.233599321Z" level=info msg="Ran pod sandbox 37f03e364ba5a63b6226fd043902e833a08acd179773444c9b3490e2a1a2bc3f with infra container: default/hello-world-app-5d498dc89-qh4wk/POD" id=6b6abb47-8820-447f-bf68-69196e55ed3a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.240633648Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=13f83409-14d8-43e8-8536-87227998fd87 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.240979333Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=13f83409-14d8-43e8-8536-87227998fd87 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.241112463Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=13f83409-14d8-43e8-8536-87227998fd87 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.242214405Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=db6e2cc3-0d8b-4f2d-acf3-52e0fdae2279 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.244680699Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.816954282Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=db6e2cc3-0d8b-4f2d-acf3-52e0fdae2279 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.817641959Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5022614f-1487-4bdd-a513-7c4f463a2c5b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.822041137Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f2b3f93b-31f3-4a39-94fb-c006a4b8c45e name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.83562575Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-qh4wk/hello-world-app" id=3cbb25c6-ec2a-491d-8b99-8d5245c69013 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.835784799Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.851645534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.851881902Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/33211d9a700ff76988eb5f6e0b8a3773b82cd1765de23715dee3327273fa3b14/merged/etc/passwd: no such file or directory"
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.85191318Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/33211d9a700ff76988eb5f6e0b8a3773b82cd1765de23715dee3327273fa3b14/merged/etc/group: no such file or directory"
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.852235964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.874752644Z" level=info msg="Created container 079b68bc29632524ad4490e676c4d44ff73abb96a9de21bb017f6dbc4345cbda: default/hello-world-app-5d498dc89-qh4wk/hello-world-app" id=3cbb25c6-ec2a-491d-8b99-8d5245c69013 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.878761032Z" level=info msg="Starting container: 079b68bc29632524ad4490e676c4d44ff73abb96a9de21bb017f6dbc4345cbda" id=c866e3f4-2e92-4b8a-bc7e-d45bb06b6b72 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:52:28 addons-184548 crio[826]: time="2025-10-25T09:52:28.885708179Z" level=info msg="Started container" PID=7207 containerID=079b68bc29632524ad4490e676c4d44ff73abb96a9de21bb017f6dbc4345cbda description=default/hello-world-app-5d498dc89-qh4wk/hello-world-app id=c866e3f4-2e92-4b8a-bc7e-d45bb06b6b72 name=/runtime.v1.RuntimeService/StartContainer sandboxID=37f03e364ba5a63b6226fd043902e833a08acd179773444c9b3490e2a1a2bc3f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	079b68bc29632       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   37f03e364ba5a       hello-world-app-5d498dc89-qh4wk             default
	deedae6e53938       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             4 seconds ago            Exited              registry-creds                           2                   8272d15cb45a5       registry-creds-764b6fb674-dk8fg             kube-system
	a242f18216e26       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   5019218e7a387       nginx                                       default
	cb5ecb8477046       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   2641306d8a1cc       busybox                                     default
	99ffce70564e9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   c8cf2b2a99b36       csi-hostpathplugin-4jzcx                    kube-system
	87380913bd0ee       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   c8cf2b2a99b36       csi-hostpathplugin-4jzcx                    kube-system
	917acafd89879       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   c8cf2b2a99b36       csi-hostpathplugin-4jzcx                    kube-system
	1511789ef92ec       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   c8cf2b2a99b36       csi-hostpathplugin-4jzcx                    kube-system
	c854ff59a87cb       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   62cba53c1d7cf       gcp-auth-78565c9fb4-ljkhx                   gcp-auth
	fe07be820f29e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   c8cf2b2a99b36       csi-hostpathplugin-4jzcx                    kube-system
	eecab95d02e84       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   a999dce303784       ingress-nginx-controller-675c5ddd98-kn8cf   ingress-nginx
	d7eb5ef695f14       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   62f1638fd598f       gadget-wc7b2                                gadget
	353ca1bd95855       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   6063dc7ae5ad8       registry-proxy-l4vs6                        kube-system
	5c51c82f38056       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   cf0c5231b71f4       nvidia-device-plugin-daemonset-7sktv        kube-system
	c1f4c2cd01a80       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   00c41a7e28d97       yakd-dashboard-5ff678cb9-kjxsl              yakd-dashboard
	25d99e542d8e7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              patch                                    0                   980d0fe304064       ingress-nginx-admission-patch-cl6qb         ingress-nginx
	d4fb23cf89dec       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   c8cf2b2a99b36       csi-hostpathplugin-4jzcx                    kube-system
	6064a19490c79       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   6e77312721790       csi-hostpath-resizer-0                      kube-system
	7c76fe020b3e5       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   dcb56990daf90       csi-hostpath-attacher-0                     kube-system
	7e5a14fca747b       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   f3e5086272c34       metrics-server-85b7d694d7-5mbb4             kube-system
	d554dffae9bef       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   16f10c65efc8d       local-path-provisioner-648f6765c9-nv5k2     local-path-storage
	9e7e539bbba98       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   d7bce8529595d       snapshot-controller-7d9fbc56b8-rlnlm        kube-system
	8697134e7e473       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   4 minutes ago            Exited              create                                   0                   915f3eea07fad       ingress-nginx-admission-create-bmfm4        ingress-nginx
	0a744f4822b1c       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   a58c40e3d670d       snapshot-controller-7d9fbc56b8-2bqhf        kube-system
	97501aeea8896       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   17b8c7bda4bc6       registry-6b586f9694-cft48                   kube-system
	fb43fde6f7081       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   55c093c8fb7c9       kube-ingress-dns-minikube                   kube-system
	cb279419abc6e       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   db9b35ce5f1fd       cloud-spanner-emulator-86bd5cbb97-wv5rr     default
	01b285afd5854       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   d57e84c4d76af       coredns-66bc5c9577-hq8d8                    kube-system
	6dea6b6abd8b2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   6512c98e5c3b1       storage-provisioner                         kube-system
	ea0a2c59127ed       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   c4e73032c3bac       kube-proxy-clv7b                            kube-system
	7c269e9ecf5ba       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   462c9337ebadf       kindnet-dn6n8                               kube-system
	a9c067b0e9c58       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   cfac145b13a4e       kube-scheduler-addons-184548                kube-system
	50b3905935f0c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   2279cb9db3497       kube-controller-manager-addons-184548       kube-system
	fc1be8cbffe43       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   6884eb0b93e43       kube-apiserver-addons-184548                kube-system
	703663e8a09cc       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   dd8e387d48f11       etcd-addons-184548                          kube-system
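The table above is CRI-level container state, equivalent to what crictl reports; a specific container can be drilled into by ID prefix, e.g.:

	sudo crictl ps -a                   # all containers, including Exited ones
	sudo crictl inspect 079b68bc29632   # full JSON for the hello-world-app container
	sudo crictl logs 079b68bc29632      # its stdout/stderr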
	
	
	==> coredns [01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f] <==
	[INFO] 10.244.0.15:59347 - 49194 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002466246s
	[INFO] 10.244.0.15:59347 - 13041 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000151903s
	[INFO] 10.244.0.15:59347 - 37803 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000245081s
	[INFO] 10.244.0.15:49398 - 62277 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000159411s
	[INFO] 10.244.0.15:49398 - 62040 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088001s
	[INFO] 10.244.0.15:47620 - 36644 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00010332s
	[INFO] 10.244.0.15:47620 - 36447 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073379s
	[INFO] 10.244.0.15:42826 - 8681 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085884s
	[INFO] 10.244.0.15:42826 - 8228 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000214377s
	[INFO] 10.244.0.15:39379 - 15507 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001713583s
	[INFO] 10.244.0.15:39379 - 15687 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001836874s
	[INFO] 10.244.0.15:39287 - 23563 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000141966s
	[INFO] 10.244.0.15:39287 - 23415 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000230435s
	[INFO] 10.244.0.21:45010 - 18529 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000199435s
	[INFO] 10.244.0.21:49727 - 64868 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000145248s
	[INFO] 10.244.0.21:54291 - 38536 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000189212s
	[INFO] 10.244.0.21:55103 - 46761 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000181072s
	[INFO] 10.244.0.21:43650 - 11242 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00017482s
	[INFO] 10.244.0.21:43503 - 51774 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138652s
	[INFO] 10.244.0.21:57499 - 50078 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004108066s
	[INFO] 10.244.0.21:46437 - 46495 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004284461s
	[INFO] 10.244.0.21:33416 - 62670 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001695515s
	[INFO] 10.244.0.21:39424 - 393 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001080535s
	[INFO] 10.244.0.23:60892 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000147792s
	[INFO] 10.244.0.23:42473 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000178816s
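The NXDOMAIN/NOERROR pairs above are normal cluster DNS search-path expansion: with ndots:5, a name like registry.kube-system.svc.cluster.local has fewer than five dots, so the resolver first tries it with each search suffix appended (hence queries such as registry.kube-system.svc.cluster.local.svc.cluster.local answering NXDOMAIN) before the absolute name resolves NOERROR. A typical pod resolver configuration that produces this pattern (conventional kubeadm defaults, not captured from this cluster; the us-east-2.compute.internal suffix is inherited from the host):

	# /etc/resolv.conf inside a pod (typical defaults)
	search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	nameserver 10.96.0.10
	options ndots:5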
	
	
	==> describe nodes <==
	Name:               addons-184548
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-184548
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=addons-184548
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_47_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-184548
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-184548"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:47:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-184548
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:52:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:52:27 +0000   Sat, 25 Oct 2025 09:47:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:52:27 +0000   Sat, 25 Oct 2025 09:47:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:52:27 +0000   Sat, 25 Oct 2025 09:47:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:52:27 +0000   Sat, 25 Oct 2025 09:47:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-184548
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ba66d0db-65f5-42cb-b217-b8f2184e05a9
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     cloud-spanner-emulator-86bd5cbb97-wv5rr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  default                     hello-world-app-5d498dc89-qh4wk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  gadget                      gadget-wc7b2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  gcp-auth                    gcp-auth-78565c9fb4-ljkhx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-kn8cf    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m7s
	  kube-system                 coredns-66bc5c9577-hq8d8                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m12s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 csi-hostpathplugin-4jzcx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-addons-184548                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m17s
	  kube-system                 kindnet-dn6n8                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m12s
	  kube-system                 kube-apiserver-addons-184548                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-addons-184548        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-clv7b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-scheduler-addons-184548                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 metrics-server-85b7d694d7-5mbb4              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m8s
	  kube-system                 nvidia-device-plugin-daemonset-7sktv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 registry-6b586f9694-cft48                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 registry-creds-764b6fb674-dk8fg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 registry-proxy-l4vs6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 snapshot-controller-7d9fbc56b8-2bqhf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 snapshot-controller-7d9fbc56b8-rlnlm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  local-path-storage          local-path-provisioner-648f6765c9-nv5k2      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-kjxsl               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m11s                  kube-proxy       
	  Warning  CgroupV1                 5m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node addons-184548 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node addons-184548 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m25s (x8 over 5m25s)  kubelet          Node addons-184548 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m18s                  kubelet          Node addons-184548 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m18s                  kubelet          Node addons-184548 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m18s                  kubelet          Node addons-184548 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m13s                  node-controller  Node addons-184548 event: Registered Node addons-184548 in Controller
	  Normal   NodeReady                4m31s                  kubelet          Node addons-184548 status is now: NodeReady
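The node description above is plain kubectl output and can be re-queried against the running profile at any time, e.g.:

	kubectl describe node addons-184548
	kubectl get node addons-184548 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints True when Ready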
	
	
	==> dmesg <==
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	[Oct25 09:35] overlayfs: idmapped layers are currently not supported
	[Oct25 09:36] overlayfs: idmapped layers are currently not supported
	[ +24.160248] overlayfs: idmapped layers are currently not supported
	[Oct25 09:37] overlayfs: idmapped layers are currently not supported
	[  +8.216028] overlayfs: idmapped layers are currently not supported
	[Oct25 09:38] overlayfs: idmapped layers are currently not supported
	[Oct25 09:39] overlayfs: idmapped layers are currently not supported
	[Oct25 09:41] overlayfs: idmapped layers are currently not supported
	[ +14.126672] overlayfs: idmapped layers are currently not supported
	[Oct25 09:42] overlayfs: idmapped layers are currently not supported
	[Oct25 09:43] overlayfs: idmapped layers are currently not supported
	[Oct25 09:45] kauditd_printk_skb: 8 callbacks suppressed
	[Oct25 09:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2] <==
	{"level":"warn","ts":"2025-10-25T09:47:07.523690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.552971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.581617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.605803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.643184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.669005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.715431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.730738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.758858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.786818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.804679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.830238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.855994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.891887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.924971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.948379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.980769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:08.005593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:08.122727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:23.674159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:23.685708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:45.997630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:46.015694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:46.078033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:46.092191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54402","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [c854ff59a87cb31257a3fa6b2393f211d9391c2200b70a5dce42efd5a674150a] <==
	2025/10/25 09:49:12 GCP Auth Webhook started!
	2025/10/25 09:49:20 Ready to marshal response ...
	2025/10/25 09:49:20 Ready to write response ...
	2025/10/25 09:49:21 Ready to marshal response ...
	2025/10/25 09:49:21 Ready to write response ...
	2025/10/25 09:49:21 Ready to marshal response ...
	2025/10/25 09:49:21 Ready to write response ...
	2025/10/25 09:49:42 Ready to marshal response ...
	2025/10/25 09:49:42 Ready to write response ...
	2025/10/25 09:49:43 Ready to marshal response ...
	2025/10/25 09:49:43 Ready to write response ...
	2025/10/25 09:49:43 Ready to marshal response ...
	2025/10/25 09:49:43 Ready to write response ...
	2025/10/25 09:49:52 Ready to marshal response ...
	2025/10/25 09:49:52 Ready to write response ...
	2025/10/25 09:50:03 Ready to marshal response ...
	2025/10/25 09:50:03 Ready to write response ...
	2025/10/25 09:50:09 Ready to marshal response ...
	2025/10/25 09:50:09 Ready to write response ...
	2025/10/25 09:50:21 Ready to marshal response ...
	2025/10/25 09:50:21 Ready to write response ...
	2025/10/25 09:52:27 Ready to marshal response ...
	2025/10/25 09:52:27 Ready to write response ...
	
	
	==> kernel <==
	 09:52:30 up  1:35,  0 user,  load average: 0.44, 1.74, 2.64
	Linux addons-184548 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced] <==
	I1025 09:50:28.230223       1 main.go:301] handling current node
	I1025 09:50:38.234071       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:50:38.234106       1 main.go:301] handling current node
	I1025 09:50:48.232353       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:50:48.232387       1 main.go:301] handling current node
	I1025 09:50:58.230067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:50:58.230183       1 main.go:301] handling current node
	I1025 09:51:08.231158       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:08.231194       1 main.go:301] handling current node
	I1025 09:51:18.229201       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:18.229243       1 main.go:301] handling current node
	I1025 09:51:28.231634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:28.231670       1 main.go:301] handling current node
	I1025 09:51:38.232642       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:38.232790       1 main.go:301] handling current node
	I1025 09:51:48.230816       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:48.230927       1 main.go:301] handling current node
	I1025 09:51:58.230906       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:58.230942       1 main.go:301] handling current node
	I1025 09:52:08.230568       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:52:08.230604       1 main.go:301] handling current node
	I1025 09:52:18.227267       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:52:18.227299       1 main.go:301] handling current node
	I1025 09:52:28.227969       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:52:28.228008       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09] <==
	E1025 09:47:58.779673       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.89.81:443: connect: connection refused" logger="UnhandledError"
	W1025 09:47:58.822451       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.89.81:443: connect: connection refused
	E1025 09:47:58.822503       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.89.81:443: connect: connection refused" logger="UnhandledError"
	W1025 09:48:22.027848       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:48:22.027915       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1025 09:48:22.027940       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 09:48:22.030238       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:48:22.030329       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1025 09:48:22.030340       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1025 09:48:32.223321       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.55.176:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.55.176:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.55.176:443: connect: connection refused" logger="UnhandledError"
	W1025 09:48:32.223891       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:48:32.224090       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 09:48:32.224994       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.55.176:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.55.176:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.55.176:443: connect: connection refused" logger="UnhandledError"
	I1025 09:48:32.291049       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 09:49:30.788798       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57550: use of closed network connection
	I1025 09:50:08.830340       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 09:50:09.132334       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.130.21"}
	I1025 09:50:15.157862       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1025 09:52:28.056103       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.3.62"}
	
	
	==> kube-controller-manager [50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90] <==
	I1025 09:47:16.019638       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-184548"
	I1025 09:47:16.019684       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:47:16.019902       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:47:16.020092       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:47:16.022568       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:47:16.022731       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:47:16.022976       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:47:16.023037       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:47:16.023254       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:47:16.023503       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:47:16.026556       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:47:16.028015       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:47:16.030962       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:47:16.032117       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:47:16.055410       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:47:45.989644       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 09:47:45.989811       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1025 09:47:45.989857       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1025 09:47:46.064921       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1025 09:47:46.069363       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 09:47:46.090418       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:47:46.169513       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:48:01.028970       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1025 09:48:16.096701       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 09:48:16.178605       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692] <==
	I1025 09:47:18.106764       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:47:18.183558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:47:18.283790       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:47:18.283829       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:47:18.283898       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:47:18.343658       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:47:18.343708       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:47:18.348691       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:47:18.352551       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:47:18.352575       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:47:18.353940       1 config.go:200] "Starting service config controller"
	I1025 09:47:18.353951       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:47:18.353967       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:47:18.353972       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:47:18.354192       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:47:18.354199       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:47:18.354818       1 config.go:309] "Starting node config controller"
	I1025 09:47:18.354826       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:47:18.354832       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:47:18.454466       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:47:18.454503       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:47:18.454560       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a] <==
	E1025 09:47:09.350652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:47:09.350720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:47:09.350773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:47:09.354405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:47:09.354503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:47:09.354581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:47:09.354705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:47:09.354769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:47:09.354836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:47:09.354900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:47:09.354964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:47:09.355058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:47:09.355124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:47:09.355187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:47:09.355234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:47:09.355285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:47:10.161730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:47:10.202164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:47:10.333479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:47:10.344190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:47:10.346571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 09:47:10.357880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:47:10.392605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:47:10.414656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1025 09:47:12.927927       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:50:29 addons-184548 kubelet[1280]: I1025 09:50:29.729336    1280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a522826f-57bd-4cbe-9317-6ff9f52218a2" path="/var/lib/kubelet/pods/a522826f-57bd-4cbe-9317-6ff9f52218a2/volumes"
	Oct 25 09:50:58 addons-184548 kubelet[1280]: I1025 09:50:58.726901    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-cft48" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:51:20 addons-184548 kubelet[1280]: I1025 09:51:20.727015    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-7sktv" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:51:29 addons-184548 kubelet[1280]: I1025 09:51:29.726923    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-l4vs6" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:52:07 addons-184548 kubelet[1280]: I1025 09:52:07.726893    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-cft48" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:52:08 addons-184548 kubelet[1280]: I1025 09:52:08.928676    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-dk8fg" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:52:11 addons-184548 kubelet[1280]: I1025 09:52:11.168122    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-dk8fg" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:52:11 addons-184548 kubelet[1280]: I1025 09:52:11.169005    1280 scope.go:117] "RemoveContainer" containerID="3b8ae9ea8c71d55bac29ba441afcf692bb1098183cd0557a12ea3964b9e15478"
	Oct 25 09:52:11 addons-184548 kubelet[1280]: E1025 09:52:11.864057    1280 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/427f8b81b1d43eb321c4624b9350f8c8df051e1383f49a6dbc0cebde942a97f2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/427f8b81b1d43eb321c4624b9350f8c8df051e1383f49a6dbc0cebde942a97f2/diff: no such file or directory, extraDiskErr: <nil>
	Oct 25 09:52:12 addons-184548 kubelet[1280]: I1025 09:52:12.090508    1280 scope.go:117] "RemoveContainer" containerID="3b8ae9ea8c71d55bac29ba441afcf692bb1098183cd0557a12ea3964b9e15478"
	Oct 25 09:52:12 addons-184548 kubelet[1280]: I1025 09:52:12.176802    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-dk8fg" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:52:12 addons-184548 kubelet[1280]: I1025 09:52:12.176860    1280 scope.go:117] "RemoveContainer" containerID="c3e2f44a0c594538e283408f7b32dbb98c04a1ffa3e9eb8bc2568c576af1c191"
	Oct 25 09:52:12 addons-184548 kubelet[1280]: E1025 09:52:12.177007    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-dk8fg_kube-system(5d6b936c-c964-41cd-a147-a05337379ebc)\"" pod="kube-system/registry-creds-764b6fb674-dk8fg" podUID="5d6b936c-c964-41cd-a147-a05337379ebc"
	Oct 25 09:52:13 addons-184548 kubelet[1280]: I1025 09:52:13.179992    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-dk8fg" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:52:13 addons-184548 kubelet[1280]: I1025 09:52:13.180049    1280 scope.go:117] "RemoveContainer" containerID="c3e2f44a0c594538e283408f7b32dbb98c04a1ffa3e9eb8bc2568c576af1c191"
	Oct 25 09:52:13 addons-184548 kubelet[1280]: E1025 09:52:13.180193    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-dk8fg_kube-system(5d6b936c-c964-41cd-a147-a05337379ebc)\"" pod="kube-system/registry-creds-764b6fb674-dk8fg" podUID="5d6b936c-c964-41cd-a147-a05337379ebc"
	Oct 25 09:52:25 addons-184548 kubelet[1280]: I1025 09:52:25.727203    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-dk8fg" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:52:25 addons-184548 kubelet[1280]: I1025 09:52:25.727274    1280 scope.go:117] "RemoveContainer" containerID="c3e2f44a0c594538e283408f7b32dbb98c04a1ffa3e9eb8bc2568c576af1c191"
	Oct 25 09:52:26 addons-184548 kubelet[1280]: I1025 09:52:26.228692    1280 scope.go:117] "RemoveContainer" containerID="c3e2f44a0c594538e283408f7b32dbb98c04a1ffa3e9eb8bc2568c576af1c191"
	Oct 25 09:52:26 addons-184548 kubelet[1280]: I1025 09:52:26.229285    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-dk8fg" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:52:26 addons-184548 kubelet[1280]: I1025 09:52:26.229436    1280 scope.go:117] "RemoveContainer" containerID="deedae6e539386921d746f8aca532aaab0e844365e6db7618d91b03cc3c1b04b"
	Oct 25 09:52:26 addons-184548 kubelet[1280]: E1025 09:52:26.229843    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-dk8fg_kube-system(5d6b936c-c964-41cd-a147-a05337379ebc)\"" pod="kube-system/registry-creds-764b6fb674-dk8fg" podUID="5d6b936c-c964-41cd-a147-a05337379ebc"
	Oct 25 09:52:27 addons-184548 kubelet[1280]: I1025 09:52:27.971587    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkb4f\" (UniqueName: \"kubernetes.io/projected/3bcdcdee-2d81-41e4-8e41-26df18b7a9a7-kube-api-access-bkb4f\") pod \"hello-world-app-5d498dc89-qh4wk\" (UID: \"3bcdcdee-2d81-41e4-8e41-26df18b7a9a7\") " pod="default/hello-world-app-5d498dc89-qh4wk"
	Oct 25 09:52:27 addons-184548 kubelet[1280]: I1025 09:52:27.971638    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3bcdcdee-2d81-41e4-8e41-26df18b7a9a7-gcp-creds\") pod \"hello-world-app-5d498dc89-qh4wk\" (UID: \"3bcdcdee-2d81-41e4-8e41-26df18b7a9a7\") " pod="default/hello-world-app-5d498dc89-qh4wk"
	Oct 25 09:52:29 addons-184548 kubelet[1280]: I1025 09:52:29.275986    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-qh4wk" podStartSLOduration=1.698721258 podStartE2EDuration="2.275967439s" podCreationTimestamp="2025-10-25 09:52:27 +0000 UTC" firstStartedPulling="2025-10-25 09:52:28.241518243 +0000 UTC m=+316.669125140" lastFinishedPulling="2025-10-25 09:52:28.818764423 +0000 UTC m=+317.246371321" observedRunningTime="2025-10-25 09:52:29.27439053 +0000 UTC m=+317.701997428" watchObservedRunningTime="2025-10-25 09:52:29.275967439 +0000 UTC m=+317.703574345"
	
	
	==> storage-provisioner [6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb] <==
	W1025 09:52:04.884128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:06.887532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:06.891917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:08.895630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:08.900005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:10.903563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:10.908392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:12.912137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:12.917191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:14.919829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:14.924479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:16.927869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:16.932546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:18.937479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:18.942081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:20.945946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:20.950688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:22.954313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:22.961087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:24.964709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:24.969241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:26.972459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:26.977309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:28.980935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:28.989145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
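The dump above matches the output format of `minikube logs` (the `==> component <==` sections), which the post-mortem helper collects on failure. A minimal sketch of regenerating it by hand, assuming the binary path and profile name used throughout this report:

	# Write the same per-component dump (kubelet, etcd, dmesg, ...) to a file
	out/minikube-linux-arm64 -p addons-184548 logs --file=logs.txt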
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-184548 -n addons-184548
helpers_test.go:269: (dbg) Run:  kubectl --context addons-184548 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-bmfm4 ingress-nginx-admission-patch-cl6qb
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-184548 describe pod ingress-nginx-admission-create-bmfm4 ingress-nginx-admission-patch-cl6qb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-184548 describe pod ingress-nginx-admission-create-bmfm4 ingress-nginx-admission-patch-cl6qb: exit status 1 (118.028155ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bmfm4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cl6qb" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-184548 describe pod ingress-nginx-admission-create-bmfm4 ingress-nginx-admission-patch-cl6qb: exit status 1
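The NotFound errors are consistent with this invocation: without a -n flag, kubectl describe searches only the default namespace, and the one-shot admission Jobs' pods may also have been garbage-collected by the time the post-mortem runs. A sketch of the namespaced lookup, assuming the conventional ingress-nginx namespace for these pods:

	# Namespace is an assumption inferred from the pod names;
	# the pods may already be gone even in the right namespace.
	kubectl --context addons-184548 -n ingress-nginx describe pod \
	  ingress-nginx-admission-create-bmfm4 \
	  ingress-nginx-admission-patch-cl6qb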
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (298.167276ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:52:31.408529  271657 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:52:31.409440  271657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:52:31.409480  271657 out.go:374] Setting ErrFile to fd 2...
	I1025 09:52:31.409501  271657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:52:31.409794  271657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:52:31.410190  271657 mustload.go:65] Loading cluster: addons-184548
	I1025 09:52:31.410609  271657 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:52:31.410652  271657 addons.go:606] checking whether the cluster is paused
	I1025 09:52:31.410780  271657 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:52:31.410810  271657 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:52:31.411295  271657 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:52:31.430392  271657 ssh_runner.go:195] Run: systemctl --version
	I1025 09:52:31.430450  271657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:52:31.454179  271657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:52:31.565051  271657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:52:31.565146  271657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:52:31.606818  271657 cri.go:89] found id: "deedae6e539386921d746f8aca532aaab0e844365e6db7618d91b03cc3c1b04b"
	I1025 09:52:31.606836  271657 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:52:31.606841  271657 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:52:31.606854  271657 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:52:31.606858  271657 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:52:31.606861  271657 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:52:31.606865  271657 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:52:31.606867  271657 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:52:31.606870  271657 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:52:31.606876  271657 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:52:31.606879  271657 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:52:31.606882  271657 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:52:31.606885  271657 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:52:31.606887  271657 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:52:31.606890  271657 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:52:31.606895  271657 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:52:31.606898  271657 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:52:31.606902  271657 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:52:31.606905  271657 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:52:31.606908  271657 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:52:31.606912  271657 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:52:31.606915  271657 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:52:31.606918  271657 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:52:31.606920  271657 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:52:31.606924  271657 cri.go:89] found id: ""
	I1025 09:52:31.606974  271657 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:52:31.625239  271657 out.go:203] 
	W1025 09:52:31.628119  271657 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:52:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:52:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:52:31.628142  271657 out.go:285] * 
	* 
	W1025 09:52:31.633317  271657 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:52:31.636290  271657 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
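Both stderr captures in this test fail the same way: before disabling an addon, minikube probes whether the cluster is paused by listing kube-system containers with crictl and then querying runc directly, and on this CRI-O node `sudo runc list -f json` exits 1 because runc's state directory /run/runc does not exist, so the command aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch of that probe, reusing the exact commands from the log above (binary path and profile name taken from this report):

	# Step 1 succeeds: enumerate kube-system containers through the CRI
	out/minikube-linux-arm64 -p addons-184548 ssh -- \
	  "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# Step 2 is the aborting check: runc has no state dir under CRI-O here
	out/minikube-linux-arm64 -p addons-184548 ssh -- "sudo runc list -f json"
	# expected on this node: level=error msg="open /run/runc: no such file or directory"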
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable ingress --alsologtostderr -v=1: exit status 11 (263.318336ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:52:31.690691  271709 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:52:31.691420  271709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:52:31.691434  271709 out.go:374] Setting ErrFile to fd 2...
	I1025 09:52:31.691440  271709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:52:31.691745  271709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:52:31.692081  271709 mustload.go:65] Loading cluster: addons-184548
	I1025 09:52:31.692471  271709 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:52:31.692495  271709 addons.go:606] checking whether the cluster is paused
	I1025 09:52:31.692601  271709 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:52:31.692616  271709 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:52:31.693117  271709 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:52:31.711288  271709 ssh_runner.go:195] Run: systemctl --version
	I1025 09:52:31.711357  271709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:52:31.733834  271709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:52:31.844628  271709 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:52:31.844715  271709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:52:31.873769  271709 cri.go:89] found id: "deedae6e539386921d746f8aca532aaab0e844365e6db7618d91b03cc3c1b04b"
	I1025 09:52:31.873802  271709 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:52:31.873808  271709 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:52:31.873812  271709 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:52:31.873816  271709 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:52:31.873820  271709 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:52:31.873823  271709 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:52:31.873827  271709 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:52:31.873831  271709 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:52:31.873840  271709 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:52:31.873844  271709 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:52:31.873854  271709 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:52:31.873857  271709 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:52:31.873860  271709 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:52:31.873863  271709 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:52:31.873872  271709 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:52:31.873878  271709 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:52:31.873888  271709 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:52:31.873892  271709 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:52:31.873895  271709 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:52:31.873900  271709 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:52:31.873905  271709 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:52:31.873908  271709 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:52:31.873915  271709 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:52:31.873918  271709 cri.go:89] found id: ""
	I1025 09:52:31.873974  271709 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:52:31.889369  271709 out.go:203] 
	W1025 09:52:31.892470  271709 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:52:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:52:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:52:31.892496  271709 out.go:285] * 
	* 
	W1025 09:52:31.897443  271709 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:52:31.900512  271709 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.40s)

TestAddons/parallel/InspektorGadget (6.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-wc7b2" [8349f93f-6893-4172-847c-19c612f84436] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004195034s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (264.792458ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:50:08.294276  269659 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:50:08.295247  269659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:50:08.295292  269659 out.go:374] Setting ErrFile to fd 2...
	I1025 09:50:08.295313  269659 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:50:08.295610  269659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:50:08.296044  269659 mustload.go:65] Loading cluster: addons-184548
	I1025 09:50:08.296458  269659 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:50:08.296509  269659 addons.go:606] checking whether the cluster is paused
	I1025 09:50:08.296634  269659 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:50:08.296669  269659 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:50:08.297129  269659 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:50:08.314857  269659 ssh_runner.go:195] Run: systemctl --version
	I1025 09:50:08.314920  269659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:50:08.338042  269659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:50:08.444917  269659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:50:08.445000  269659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:50:08.474381  269659 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:50:08.474402  269659 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:50:08.474406  269659 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:50:08.474410  269659 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:50:08.474413  269659 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:50:08.474417  269659 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:50:08.474420  269659 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:50:08.474423  269659 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:50:08.474426  269659 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:50:08.474433  269659 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:50:08.474436  269659 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:50:08.474439  269659 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:50:08.474442  269659 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:50:08.474445  269659 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:50:08.474448  269659 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:50:08.474453  269659 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:50:08.474456  269659 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:50:08.474460  269659 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:50:08.474464  269659 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:50:08.474467  269659 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:50:08.474472  269659 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:50:08.474475  269659 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:50:08.474482  269659 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:50:08.474485  269659 cri.go:89] found id: ""
	I1025 09:50:08.474535  269659 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:50:08.490538  269659 out.go:203] 
	W1025 09:50:08.493436  269659 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:50:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:50:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:50:08.493462  269659 out.go:285] * 
	* 
	W1025 09:50:08.498578  269659 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:50:08.501650  269659 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)

TestAddons/parallel/MetricsServer (5.39s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.350176ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-5mbb4" [473fd1dc-bcd8-4299-ae17-9e3bca061756] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005069467s
addons_test.go:463: (dbg) Run:  kubectl --context addons-184548 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (275.733844ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:50:02.023222  269430 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:50:02.024214  269430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:50:02.024269  269430 out.go:374] Setting ErrFile to fd 2...
	I1025 09:50:02.024295  269430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:50:02.024661  269430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:50:02.025092  269430 mustload.go:65] Loading cluster: addons-184548
	I1025 09:50:02.025783  269430 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:50:02.025846  269430 addons.go:606] checking whether the cluster is paused
	I1025 09:50:02.026048  269430 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:50:02.026087  269430 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:50:02.026683  269430 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:50:02.045778  269430 ssh_runner.go:195] Run: systemctl --version
	I1025 09:50:02.045830  269430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:50:02.065755  269430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:50:02.168738  269430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:50:02.168889  269430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:50:02.202113  269430 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:50:02.202177  269430 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:50:02.202203  269430 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:50:02.202229  269430 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:50:02.202266  269430 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:50:02.202294  269430 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:50:02.202318  269430 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:50:02.202342  269430 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:50:02.202375  269430 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:50:02.202403  269430 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:50:02.202425  269430 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:50:02.202448  269430 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:50:02.202479  269430 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:50:02.202504  269430 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:50:02.202526  269430 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:50:02.202560  269430 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:50:02.202607  269430 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:50:02.202636  269430 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:50:02.202658  269430 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:50:02.202677  269430 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:50:02.202714  269430 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:50:02.202737  269430 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:50:02.202756  269430 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:50:02.202776  269430 cri.go:89] found id: ""
	I1025 09:50:02.202867  269430 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:50:02.218342  269430 out.go:203] 
	W1025 09:50:02.221161  269430 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:50:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:50:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:50:02.221194  269430 out.go:285] * 
	* 
	W1025 09:50:02.226994  269430 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:50:02.230203  269430 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.39s)
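The metrics pipeline itself was healthy here: the metrics-server pod stabilized and the kubectl top call above completed without error. Re-checking by hand uses the same command the test ran (context name from this run):

	kubectl --context addons-184548 top pods -n kube-system

Only the trailing "addons disable metrics-server" call failed, on the same runc paused-check described under the Ingress failure above.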

TestAddons/parallel/CSI (36.27s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1025 09:49:53.521005  261256 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1025 09:49:53.527237  261256 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 09:49:53.528729  261256 kapi.go:107] duration metric: took 6.268257ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.784259ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-184548 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-184548 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b89a3a85-d3dd-4a19-8350-97f9f0bf6eab] Pending
helpers_test.go:352: "task-pv-pod" [b89a3a85-d3dd-4a19-8350-97f9f0bf6eab] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b89a3a85-d3dd-4a19-8350-97f9f0bf6eab] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.010180557s
addons_test.go:572: (dbg) Run:  kubectl --context addons-184548 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-184548 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-184548 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-184548 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-184548 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-184548 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-184548 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [a522826f-57bd-4cbe-9317-6ff9f52218a2] Pending
helpers_test.go:352: "task-pv-pod-restore" [a522826f-57bd-4cbe-9317-6ff9f52218a2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [a522826f-57bd-4cbe-9317-6ff9f52218a2] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003332897s
addons_test.go:614: (dbg) Run:  kubectl --context addons-184548 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-184548 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-184548 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (283.832582ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:50:29.280229  270361 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:50:29.281030  270361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:50:29.281046  270361 out.go:374] Setting ErrFile to fd 2...
	I1025 09:50:29.281052  270361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:50:29.281350  270361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:50:29.281703  270361 mustload.go:65] Loading cluster: addons-184548
	I1025 09:50:29.282143  270361 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:50:29.282183  270361 addons.go:606] checking whether the cluster is paused
	I1025 09:50:29.282320  270361 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:50:29.282338  270361 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:50:29.282855  270361 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:50:29.300356  270361 ssh_runner.go:195] Run: systemctl --version
	I1025 09:50:29.300420  270361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:50:29.322928  270361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:50:29.441156  270361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:50:29.441294  270361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:50:29.470508  270361 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:50:29.470532  270361 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:50:29.470537  270361 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:50:29.470541  270361 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:50:29.470545  270361 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:50:29.470549  270361 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:50:29.470552  270361 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:50:29.470555  270361 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:50:29.470558  270361 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:50:29.470568  270361 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:50:29.470571  270361 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:50:29.470575  270361 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:50:29.470578  270361 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:50:29.470582  270361 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:50:29.470585  270361 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:50:29.470593  270361 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:50:29.470600  270361 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:50:29.470605  270361 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:50:29.470608  270361 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:50:29.470612  270361 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:50:29.470616  270361 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:50:29.470619  270361 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:50:29.470622  270361 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:50:29.470625  270361 cri.go:89] found id: ""
	I1025 09:50:29.470672  270361 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:50:29.487823  270361 out.go:203] 
	W1025 09:50:29.490661  270361 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:50:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:50:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:50:29.490707  270361 out.go:285] * 
	* 
	W1025 09:50:29.495773  270361 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:50:29.498941  270361 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (283.66915ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:50:29.561358  270406 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:50:29.562291  270406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:50:29.562311  270406 out.go:374] Setting ErrFile to fd 2...
	I1025 09:50:29.562319  270406 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:50:29.562628  270406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:50:29.562948  270406 mustload.go:65] Loading cluster: addons-184548
	I1025 09:50:29.563354  270406 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:50:29.563391  270406 addons.go:606] checking whether the cluster is paused
	I1025 09:50:29.563525  270406 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:50:29.563543  270406 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:50:29.564073  270406 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:50:29.583192  270406 ssh_runner.go:195] Run: systemctl --version
	I1025 09:50:29.583253  270406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:50:29.605577  270406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:50:29.716471  270406 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:50:29.716574  270406 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:50:29.756682  270406 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:50:29.756711  270406 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:50:29.756716  270406 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:50:29.756719  270406 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:50:29.756722  270406 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:50:29.756726  270406 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:50:29.756729  270406 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:50:29.756732  270406 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:50:29.756735  270406 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:50:29.756741  270406 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:50:29.756744  270406 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:50:29.756748  270406 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:50:29.756751  270406 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:50:29.756754  270406 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:50:29.756757  270406 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:50:29.756762  270406 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:50:29.756765  270406 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:50:29.756768  270406 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:50:29.756771  270406 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:50:29.756774  270406 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:50:29.756779  270406 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:50:29.756782  270406 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:50:29.756785  270406 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:50:29.756788  270406 cri.go:89] found id: ""
	I1025 09:50:29.756849  270406 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:50:29.773636  270406 out.go:203] 
	W1025 09:50:29.776707  270406 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:50:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:50:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:50:29.776734  270406 out.go:285] * 
	* 
	W1025 09:50:29.781769  270406 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:50:29.784712  270406 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (36.27s)
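The CSI provision/snapshot/restore flow itself passed: both task-pv pods went healthy, and only the two trailing addon-disable calls failed. A condensed replay of the sequence the test exercised, using the same manifests from the repo's testdata/csi-hostpath-driver directory (run from the test's working directory; the per-step waits on PVC phase and pod readiness are omitted for brevity):

	kubectl --context addons-184548 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-184548 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-184548 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-184548 delete pod task-pv-pod
	kubectl --context addons-184548 delete pvc hpvc
	kubectl --context addons-184548 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-184548 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	kubectl --context addons-184548 delete pod task-pv-pod-restore
	kubectl --context addons-184548 delete pvc hpvc-restore
	kubectl --context addons-184548 delete volumesnapshot new-snapshot-demo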

TestAddons/parallel/Headlamp (3.72s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-184548 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-184548 --alsologtostderr -v=1: exit status 11 (367.346416ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:49:53.228974  268747 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:49:53.229954  268747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:53.230092  268747 out.go:374] Setting ErrFile to fd 2...
	I1025 09:49:53.230115  268747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:53.230567  268747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:49:53.233436  268747 mustload.go:65] Loading cluster: addons-184548
	I1025 09:49:53.233885  268747 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:53.233930  268747 addons.go:606] checking whether the cluster is paused
	I1025 09:49:53.234081  268747 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:53.234116  268747 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:49:53.234641  268747 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:49:53.260333  268747 ssh_runner.go:195] Run: systemctl --version
	I1025 09:49:53.260380  268747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:49:53.297802  268747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:49:53.404967  268747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:49:53.405052  268747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:49:53.437889  268747 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:49:53.437913  268747 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:49:53.437918  268747 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:49:53.437922  268747 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:49:53.437926  268747 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:49:53.437929  268747 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:49:53.437933  268747 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:49:53.437937  268747 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:49:53.437940  268747 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:49:53.437946  268747 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:49:53.437950  268747 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:49:53.437954  268747 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:49:53.437961  268747 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:49:53.437964  268747 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:49:53.437968  268747 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:49:53.438008  268747 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:49:53.438013  268747 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:49:53.438017  268747 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:49:53.438021  268747 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:49:53.438024  268747 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:49:53.438031  268747 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:49:53.438037  268747 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:49:53.438040  268747 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:49:53.438044  268747 cri.go:89] found id: ""
	I1025 09:49:53.438095  268747 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:49:53.471232  268747 out.go:203] 
	W1025 09:49:53.474542  268747 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:49:53.474585  268747 out.go:285] * 
	* 
	W1025 09:49:53.481194  268747 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:49:53.484647  268747 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-184548 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-184548
helpers_test.go:243: (dbg) docker inspect addons-184548:

-- stdout --
	[
	    {
	        "Id": "d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa",
	        "Created": "2025-10-25T09:46:43.864888409Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 262403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:46:43.925349297Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa/hostname",
	        "HostsPath": "/var/lib/docker/containers/d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa/hosts",
	        "LogPath": "/var/lib/docker/containers/d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa/d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa-json.log",
	        "Name": "/addons-184548",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-184548:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-184548",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d746aa6cc56e4e44363f4ec763338bfeda3edf0d6d5944949e8319800c322ffa",
	                "LowerDir": "/var/lib/docker/overlay2/70a2730a7c6d8a28c641099609d27ac2418e31332416ad60480de8113ee47513-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/70a2730a7c6d8a28c641099609d27ac2418e31332416ad60480de8113ee47513/merged",
	                "UpperDir": "/var/lib/docker/overlay2/70a2730a7c6d8a28c641099609d27ac2418e31332416ad60480de8113ee47513/diff",
	                "WorkDir": "/var/lib/docker/overlay2/70a2730a7c6d8a28c641099609d27ac2418e31332416ad60480de8113ee47513/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-184548",
	                "Source": "/var/lib/docker/volumes/addons-184548/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-184548",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-184548",
	                "name.minikube.sigs.k8s.io": "addons-184548",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1f4b1502031e199b68d3ceebdd2c1ed9f60c627fb314ed5892653a598b960c8b",
	            "SandboxKey": "/var/run/docker/netns/1f4b1502031e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-184548": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:90:e3:f2:e5:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d8057e9a9ef0fb708e302fb11c8c51feb3894af3ea427677c9c6034fe8ed2ba",
	                    "EndpointID": "614a9a5549e8d82f9b7f8c5c5fbb79a6845a9ec993e865a980c8bb97a67b310b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-184548",
	                        "d746aa6cc56e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
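
The inspect dump above shows how the addons-184548 node publishes its guest ports (22, 2376, 5000, 8443, 32443) on 127.0.0.1 behind ephemeral host ports. A minimal sketch of pulling one mapping back out of that JSON, reusing the same Go-template pattern the harness itself runs later in this log (illustrative only, not part of the captured run):

    # Host port backing the guest SSH port, per NetworkSettings above:
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-184548
    # -> 33133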
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-184548 -n addons-184548
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-184548 logs -n 25: (1.627116808s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-770401 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-770401   │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ delete  │ -p download-only-770401                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-770401   │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -o=json --download-only -p download-only-865577 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-865577   │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ delete  │ -p download-only-865577                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-865577   │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ delete  │ -p download-only-770401                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-770401   │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ delete  │ -p download-only-865577                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-865577   │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ --download-only -p download-docker-540570 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-540570 │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ delete  │ -p download-docker-540570                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-540570 │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ --download-only -p binary-mirror-439045 --alsologtostderr --binary-mirror http://127.0.0.1:39931 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-439045   │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ delete  │ -p binary-mirror-439045                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-439045   │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ addons  │ enable dashboard -p addons-184548                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ addons  │ disable dashboard -p addons-184548                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ start   │ -p addons-184548 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:49 UTC │
	│ addons  │ addons-184548 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ addons  │ addons-184548 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ addons  │ addons-184548 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ addons  │ addons-184548 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ ip      │ addons-184548 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │ 25 Oct 25 09:49 UTC │
	│ addons  │ addons-184548 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ ssh     │ addons-184548 ssh cat /opt/local-path-provisioner/pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │ 25 Oct 25 09:49 UTC │
	│ addons  │ addons-184548 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ addons  │ enable headlamp -p addons-184548 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	│ addons  │ addons-184548 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-184548          │ jenkins │ v1.37.0 │ 25 Oct 25 09:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:46:17
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:46:17.798034  262001 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:46:17.798150  262001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:17.798161  262001 out.go:374] Setting ErrFile to fd 2...
	I1025 09:46:17.798167  262001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:17.798443  262001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:46:17.798921  262001 out.go:368] Setting JSON to false
	I1025 09:46:17.799739  262001 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5329,"bootTime":1761380249,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:46:17.799813  262001 start.go:141] virtualization:  
	I1025 09:46:17.803169  262001 out.go:179] * [addons-184548] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:46:17.806898  262001 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:46:17.806997  262001 notify.go:220] Checking for updates...
	I1025 09:46:17.812937  262001 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:46:17.815822  262001 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 09:46:17.818629  262001 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 09:46:17.821549  262001 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:46:17.824451  262001 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:46:17.827632  262001 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:46:17.851753  262001 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:46:17.851890  262001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:17.912370  262001 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-25 09:46:17.902522334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:46:17.912475  262001 docker.go:318] overlay module found
	I1025 09:46:17.915659  262001 out.go:179] * Using the docker driver based on user configuration
	I1025 09:46:17.918475  262001 start.go:305] selected driver: docker
	I1025 09:46:17.918496  262001 start.go:925] validating driver "docker" against <nil>
	I1025 09:46:17.918511  262001 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:46:17.919230  262001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:17.971680  262001 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-25 09:46:17.962443717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:46:17.971837  262001 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:46:17.972084  262001 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:46:17.974996  262001 out.go:179] * Using Docker driver with root privileges
	I1025 09:46:17.978124  262001 cni.go:84] Creating CNI manager for ""
	I1025 09:46:17.978191  262001 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:46:17.978201  262001 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:46:17.978290  262001 start.go:349] cluster config:
	{Name:addons-184548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-184548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:46:17.981612  262001 out.go:179] * Starting "addons-184548" primary control-plane node in "addons-184548" cluster
	I1025 09:46:17.984581  262001 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:46:17.987767  262001 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:46:17.990713  262001 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:46:17.990970  262001 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:17.991013  262001 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:46:17.991033  262001 cache.go:58] Caching tarball of preloaded images
	I1025 09:46:17.991111  262001 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:46:17.991125  262001 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:46:17.991469  262001 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/config.json ...
	I1025 09:46:17.991498  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/config.json: {Name:mk36831340e80edd5b284df694d7fb9085ffb2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:18.019742  262001 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:46:18.019900  262001 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 09:46:18.019928  262001 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 09:46:18.019937  262001 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 09:46:18.019946  262001 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 09:46:18.019951  262001 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1025 09:46:35.926538  262001 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1025 09:46:35.926580  262001 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:46:35.926626  262001 start.go:360] acquireMachinesLock for addons-184548: {Name:mkee07b743b61356246760cb6ca511eba06d1efd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:46:35.926739  262001 start.go:364] duration metric: took 89.002µs to acquireMachinesLock for "addons-184548"
	I1025 09:46:35.926771  262001 start.go:93] Provisioning new machine with config: &{Name:addons-184548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-184548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:46:35.926857  262001 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:46:35.930325  262001 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 09:46:35.930578  262001 start.go:159] libmachine.API.Create for "addons-184548" (driver="docker")
	I1025 09:46:35.930615  262001 client.go:168] LocalClient.Create starting
	I1025 09:46:35.930740  262001 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem
	I1025 09:46:36.530284  262001 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem
	I1025 09:46:37.079619  262001 cli_runner.go:164] Run: docker network inspect addons-184548 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:46:37.097254  262001 cli_runner.go:211] docker network inspect addons-184548 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:46:37.097370  262001 network_create.go:284] running [docker network inspect addons-184548] to gather additional debugging logs...
	I1025 09:46:37.097392  262001 cli_runner.go:164] Run: docker network inspect addons-184548
	W1025 09:46:37.112662  262001 cli_runner.go:211] docker network inspect addons-184548 returned with exit code 1
	I1025 09:46:37.112696  262001 network_create.go:287] error running [docker network inspect addons-184548]: docker network inspect addons-184548: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-184548 not found
	I1025 09:46:37.112711  262001 network_create.go:289] output of [docker network inspect addons-184548]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-184548 not found
	
	** /stderr **
	I1025 09:46:37.112820  262001 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:46:37.130736  262001 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197e5b0}
	I1025 09:46:37.130776  262001 network_create.go:124] attempt to create docker network addons-184548 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 09:46:37.130842  262001 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-184548 addons-184548
	I1025 09:46:37.189293  262001 network_create.go:108] docker network addons-184548 192.168.49.0/24 created
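
The subnet and gateway picked above can be read back with the same IPAM template minikube used when probing for a free network. A sketch under the values captured here, not part of the run itself:

    docker network inspect addons-184548 -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # -> 192.168.49.0/24 192.168.49.1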
	I1025 09:46:37.189332  262001 kic.go:121] calculated static IP "192.168.49.2" for the "addons-184548" container
	I1025 09:46:37.189405  262001 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:46:37.204558  262001 cli_runner.go:164] Run: docker volume create addons-184548 --label name.minikube.sigs.k8s.io=addons-184548 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:46:37.223677  262001 oci.go:103] Successfully created a docker volume addons-184548
	I1025 09:46:37.223768  262001 cli_runner.go:164] Run: docker run --rm --name addons-184548-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-184548 --entrypoint /usr/bin/test -v addons-184548:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:46:39.327210  262001 cli_runner.go:217] Completed: docker run --rm --name addons-184548-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-184548 --entrypoint /usr/bin/test -v addons-184548:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.103401574s)
	I1025 09:46:39.327253  262001 oci.go:107] Successfully prepared a docker volume addons-184548
	I1025 09:46:39.327285  262001 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:39.327306  262001 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:46:39.327367  262001 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-184548:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:46:43.790608  262001 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-184548:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.463197984s)
	I1025 09:46:43.790640  262001 kic.go:203] duration metric: took 4.463331163s to extract preloaded images to volume ...
	W1025 09:46:43.790787  262001 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 09:46:43.790889  262001 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:46:43.845544  262001 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-184548 --name addons-184548 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-184548 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-184548 --network addons-184548 --ip 192.168.49.2 --volume addons-184548:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:46:44.153265  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Running}}
	I1025 09:46:44.183122  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:46:44.207472  262001 cli_runner.go:164] Run: docker exec addons-184548 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:46:44.265188  262001 oci.go:144] the created container "addons-184548" has a running status.
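
The two inspect calls above check different fields: ".State.Running" is a boolean, while ".State.Status" is the state string that oci.go reports here. Standalone equivalents (illustrative):

    docker container inspect addons-184548 --format={{.State.Running}}   # -> true
    docker container inspect addons-184548 --format={{.State.Status}}    # -> running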
	I1025 09:46:44.265219  262001 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa...
	I1025 09:46:45.052696  262001 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:46:45.077254  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:46:45.097434  262001 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:46:45.097456  262001 kic_runner.go:114] Args: [docker exec --privileged addons-184548 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:46:45.169513  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:46:45.202667  262001 machine.go:93] provisionDockerMachine start ...
	I1025 09:46:45.204389  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:45.239505  262001 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:45.239884  262001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 09:46:45.239910  262001 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:46:45.240763  262001 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 09:46:48.389631  262001 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-184548
	
	I1025 09:46:48.389657  262001 ubuntu.go:182] provisioning hostname "addons-184548"
	I1025 09:46:48.389959  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:48.415500  262001 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:48.415825  262001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 09:46:48.415845  262001 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-184548 && echo "addons-184548" | sudo tee /etc/hostname
	I1025 09:46:48.571490  262001 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-184548
	
	I1025 09:46:48.571584  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:48.589303  262001 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:48.589612  262001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 09:46:48.589635  262001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-184548' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-184548/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-184548' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:46:48.738139  262001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:46:48.738163  262001 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 09:46:48.738182  262001 ubuntu.go:190] setting up certificates
	I1025 09:46:48.738192  262001 provision.go:84] configureAuth start
	I1025 09:46:48.738253  262001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-184548
	I1025 09:46:48.755924  262001 provision.go:143] copyHostCerts
	I1025 09:46:48.756017  262001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 09:46:48.756150  262001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 09:46:48.756219  262001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 09:46:48.756270  262001 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.addons-184548 san=[127.0.0.1 192.168.49.2 addons-184548 localhost minikube]
	I1025 09:46:49.069873  262001 provision.go:177] copyRemoteCerts
	I1025 09:46:49.069938  262001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:46:49.069999  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:49.087428  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:46:49.189872  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:46:49.207405  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 09:46:49.226290  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:46:49.244537  262001 provision.go:87] duration metric: took 506.321462ms to configureAuth
	I1025 09:46:49.244566  262001 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:46:49.244759  262001 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:46:49.244874  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:49.262954  262001 main.go:141] libmachine: Using SSH client type: native
	I1025 09:46:49.263302  262001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1025 09:46:49.263323  262001 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:46:49.521119  262001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:46:49.521143  262001 machine.go:96] duration metric: took 4.316842388s to provisionDockerMachine
	I1025 09:46:49.521154  262001 client.go:171] duration metric: took 13.590528688s to LocalClient.Create
	I1025 09:46:49.521167  262001 start.go:167] duration metric: took 13.590591335s to libmachine.API.Create "addons-184548"
	I1025 09:46:49.521175  262001 start.go:293] postStartSetup for "addons-184548" (driver="docker")
	I1025 09:46:49.521185  262001 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:46:49.521259  262001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:46:49.521299  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:49.540081  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:46:49.646496  262001 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:46:49.649944  262001 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:46:49.649975  262001 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:46:49.650007  262001 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 09:46:49.650086  262001 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 09:46:49.650115  262001 start.go:296] duration metric: took 128.934716ms for postStartSetup
	I1025 09:46:49.650441  262001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-184548
	I1025 09:46:49.668051  262001 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/config.json ...
	I1025 09:46:49.668337  262001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:46:49.668386  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:49.684919  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:46:49.786965  262001 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:46:49.791690  262001 start.go:128] duration metric: took 13.864815963s to createHost
	I1025 09:46:49.791713  262001 start.go:83] releasing machines lock for "addons-184548", held for 13.864960358s
	I1025 09:46:49.791788  262001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-184548
	I1025 09:46:49.809011  262001 ssh_runner.go:195] Run: cat /version.json
	I1025 09:46:49.809067  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:49.809330  262001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:46:49.809400  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:46:49.829044  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:46:49.836852  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:46:49.929781  262001 ssh_runner.go:195] Run: systemctl --version
	I1025 09:46:50.023136  262001 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:46:50.069153  262001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:46:50.073746  262001 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:46:50.073828  262001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:46:50.105565  262001 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 09:46:50.105642  262001 start.go:495] detecting cgroup driver to use...
	I1025 09:46:50.105712  262001 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:46:50.105792  262001 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:46:50.125319  262001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:46:50.138818  262001 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:46:50.138917  262001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:46:50.156283  262001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:46:50.175312  262001 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:46:50.288956  262001 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:46:50.411532  262001 docker.go:234] disabling docker service ...
	I1025 09:46:50.411616  262001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:46:50.435605  262001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:46:50.449662  262001 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:46:50.568766  262001 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:46:50.693403  262001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:46:50.705781  262001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:46:50.720017  262001 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:46:50.720132  262001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.728549  262001 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:46:50.728657  262001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.737214  262001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.745676  262001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.754458  262001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:46:50.762095  262001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.770982  262001 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.788951  262001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:46:50.797650  262001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:46:50.804936  262001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:46:50.812275  262001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:50.923320  262001 ssh_runner.go:195] Run: sudo systemctl restart crio
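	Taken together, the sed edits above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is a reconstruction from the commands in this log, not a capture of the file itself, and the section placement assumes CRI-O's usual layout:
	
		[crio.image]
		# pause image pinned by minikube
		pause_image = "registry.k8s.io/pause:3.10.1"
	
		[crio.runtime]
		# match the cgroup driver detected on the host
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		# let pods bind ports below 1024 without extra capabilities
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	
	The daemon-reload/restart pair above is what makes CRI-O pick these edits up.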
	I1025 09:46:51.048496  262001 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:46:51.048583  262001 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:46:51.052466  262001 start.go:563] Will wait 60s for crictl version
	I1025 09:46:51.052528  262001 ssh_runner.go:195] Run: which crictl
	I1025 09:46:51.056056  262001 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:46:51.081881  262001 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:46:51.082022  262001 ssh_runner.go:195] Run: crio --version
	I1025 09:46:51.113909  262001 ssh_runner.go:195] Run: crio --version
	I1025 09:46:51.150964  262001 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:46:51.153867  262001 cli_runner.go:164] Run: docker network inspect addons-184548 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:46:51.171563  262001 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 09:46:51.175883  262001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
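	The grep/echo/cp pipeline above is minikube's idempotent /etc/hosts rewrite: drop any stale line ending in the target hostname, append the current mapping, then copy the temp file back into place. The same pattern recurs below for control-plane.minikube.internal. In generic form (NAME and IP are placeholders, not literal values; the separator is a literal tab):
	
		# drop any stale entry for NAME, then append the current IP mapping
		{ grep -v $'\tNAME$' /etc/hosts; echo "IP	NAME"; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts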
	I1025 09:46:51.186404  262001 kubeadm.go:883] updating cluster {Name:addons-184548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-184548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:46:51.186525  262001 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:51.186590  262001 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:46:51.221268  262001 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:46:51.221295  262001 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:46:51.221373  262001 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:46:51.247689  262001 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:46:51.247713  262001 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:46:51.247721  262001 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 09:46:51.247808  262001 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-184548 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-184548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:46:51.247892  262001 ssh_runner.go:195] Run: crio config
	I1025 09:46:51.322518  262001 cni.go:84] Creating CNI manager for ""
	I1025 09:46:51.322540  262001 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:46:51.322556  262001 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:46:51.322606  262001 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-184548 NodeName:addons-184548 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:46:51.322777  262001 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-184548"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:46:51.322877  262001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:46:51.331139  262001 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:46:51.331210  262001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:46:51.339127  262001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 09:46:51.352931  262001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:46:51.366284  262001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1025 09:46:51.379245  262001 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:46:51.382872  262001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:46:51.393279  262001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:46:51.519648  262001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:46:51.534951  262001 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548 for IP: 192.168.49.2
	I1025 09:46:51.535015  262001 certs.go:195] generating shared ca certs ...
	I1025 09:46:51.535048  262001 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:51.535205  262001 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 09:46:51.674885  262001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt ...
	I1025 09:46:51.674920  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt: {Name:mk17b6c331a07a17ef84fde02319838a2ef3698b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:51.675147  262001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key ...
	I1025 09:46:51.675162  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key: {Name:mkc75db84f781c6e360c2b5ee59238e50158dd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:51.675253  262001 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 09:46:53.169401  262001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt ...
	I1025 09:46:53.169433  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt: {Name:mk6204789820541fbec61e8b3338e45bfbabb8eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:53.169603  262001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key ...
	I1025 09:46:53.169619  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key: {Name:mk03dd36e25909c48f80a11b0608190f600537f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:53.169694  262001 certs.go:257] generating profile certs ...
	I1025 09:46:53.169755  262001 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.key
	I1025 09:46:53.169776  262001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt with IP's: []
	I1025 09:46:54.423246  262001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt ...
	I1025 09:46:54.423278  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: {Name:mk17665482a38819e487cea64dec596148ccbdad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:54.423464  262001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.key ...
	I1025 09:46:54.423477  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.key: {Name:mk7016254a354af6a06fafff6e0189bc8732f0ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:54.423562  262001 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.key.9f359fbd
	I1025 09:46:54.423583  262001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.crt.9f359fbd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1025 09:46:54.550325  262001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.crt.9f359fbd ...
	I1025 09:46:54.550357  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.crt.9f359fbd: {Name:mk2a068cc8085d4305be8fdd0e0e528d7c5187c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:54.550522  262001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.key.9f359fbd ...
	I1025 09:46:54.550536  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.key.9f359fbd: {Name:mk88472b91a7a8d5389456f0257638c3f1be3f40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:54.550634  262001 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.crt.9f359fbd -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.crt
	I1025 09:46:54.550726  262001 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.key.9f359fbd -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.key
	I1025 09:46:54.550782  262001 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.key
	I1025 09:46:54.550804  262001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.crt with IP's: []
	I1025 09:46:54.793614  262001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.crt ...
	I1025 09:46:54.793646  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.crt: {Name:mk0f2992407286a8eb37719eeb18c6ecc353fe65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:54.793820  262001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.key ...
	I1025 09:46:54.793835  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.key: {Name:mk4b9a90f48e0226c6f68f80a7710e3117e55c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:54.794048  262001 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:46:54.794097  262001 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:46:54.794126  262001 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:46:54.794157  262001 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 09:46:54.794726  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:46:54.812457  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 09:46:54.831363  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:46:54.849780  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:46:54.866631  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:46:54.883537  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:46:54.900536  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:46:54.916750  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:46:54.934038  262001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:46:54.951449  262001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:46:54.964598  262001 ssh_runner.go:195] Run: openssl version
	I1025 09:46:54.970745  262001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:46:54.979436  262001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:54.983084  262001 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:54.983182  262001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:46:55.024845  262001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
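	The b5213941.0 symlink name above is not arbitrary: OpenSSL resolves CA certificates by subject hash, so the `openssl x509 -hash -noout` call two lines up must have printed b5213941, and that hash plus a .0 suffix becomes the lookup name under /etc/ssl/certs. Reconstructed as manual steps (the hash value is taken from this log, not re-verified):
	
		$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		b5213941
		$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0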
	I1025 09:46:55.035295  262001 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:46:55.039499  262001 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:46:55.039550  262001 kubeadm.go:400] StartCluster: {Name:addons-184548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-184548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:46:55.039627  262001 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:46:55.039693  262001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:46:55.072139  262001 cri.go:89] found id: ""
	I1025 09:46:55.072287  262001 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:46:55.080462  262001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:46:55.088777  262001 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:46:55.088863  262001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:46:55.096964  262001 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:46:55.097033  262001 kubeadm.go:157] found existing configuration files:
	
	I1025 09:46:55.097138  262001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:46:55.105093  262001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:46:55.105186  262001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:46:55.113349  262001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:46:55.123394  262001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:46:55.123479  262001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:46:55.131062  262001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:46:55.139008  262001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:46:55.139117  262001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:46:55.146910  262001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:46:55.155408  262001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:46:55.155511  262001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:46:55.163613  262001 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:46:55.228034  262001 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 09:46:55.228279  262001 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 09:46:55.297471  262001 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:47:12.323653  262001 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:47:12.323718  262001 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:47:12.323809  262001 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:47:12.323877  262001 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 09:47:12.323918  262001 kubeadm.go:318] OS: Linux
	I1025 09:47:12.323965  262001 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:47:12.324016  262001 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 09:47:12.324064  262001 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:47:12.324114  262001 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:47:12.324165  262001 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:47:12.324215  262001 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:47:12.324262  262001 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:47:12.324312  262001 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:47:12.324360  262001 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 09:47:12.324434  262001 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:47:12.324532  262001 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:47:12.324624  262001 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:47:12.324689  262001 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:47:12.327668  262001 out.go:252]   - Generating certificates and keys ...
	I1025 09:47:12.327772  262001 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:47:12.327894  262001 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:47:12.327982  262001 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:47:12.328047  262001 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:47:12.328117  262001 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:47:12.328171  262001 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:47:12.328229  262001 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:47:12.328353  262001 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-184548 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:47:12.328417  262001 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:47:12.328552  262001 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-184548 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:47:12.328627  262001 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:47:12.328701  262001 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:47:12.328755  262001 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:47:12.328832  262001 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:47:12.328962  262001 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:47:12.329042  262001 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:47:12.329107  262001 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:47:12.329175  262001 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:47:12.329236  262001 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:47:12.329345  262001 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:47:12.329422  262001 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:47:12.332579  262001 out.go:252]   - Booting up control plane ...
	I1025 09:47:12.332716  262001 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:47:12.332833  262001 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:47:12.332940  262001 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:47:12.333075  262001 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:47:12.333179  262001 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:47:12.333346  262001 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:47:12.333468  262001 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:47:12.333522  262001 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:47:12.333710  262001 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:47:12.333837  262001 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:47:12.333904  262001 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.501068458s
	I1025 09:47:12.334032  262001 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:47:12.334135  262001 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1025 09:47:12.334246  262001 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:47:12.334333  262001 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:47:12.334426  262001 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.518311674s
	I1025 09:47:12.334517  262001 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.42953607s
	I1025 09:47:12.334610  262001 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001373423s
	I1025 09:47:12.334755  262001 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:47:12.334921  262001 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:47:12.335008  262001 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:47:12.335247  262001 kubeadm.go:318] [mark-control-plane] Marking the node addons-184548 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:47:12.335330  262001 kubeadm.go:318] [bootstrap-token] Using token: 7qak07.9wrl07i3bus0m2or
	I1025 09:47:12.340215  262001 out.go:252]   - Configuring RBAC rules ...
	I1025 09:47:12.340345  262001 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:47:12.340440  262001 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:47:12.340600  262001 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:47:12.340739  262001 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:47:12.340865  262001 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:47:12.340958  262001 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:47:12.341080  262001 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:47:12.341129  262001 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:47:12.341181  262001 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:47:12.341190  262001 kubeadm.go:318] 
	I1025 09:47:12.341252  262001 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:47:12.341259  262001 kubeadm.go:318] 
	I1025 09:47:12.341370  262001 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:47:12.341428  262001 kubeadm.go:318] 
	I1025 09:47:12.341461  262001 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:47:12.341529  262001 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:47:12.341588  262001 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:47:12.341598  262001 kubeadm.go:318] 
	I1025 09:47:12.341655  262001 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:47:12.341663  262001 kubeadm.go:318] 
	I1025 09:47:12.341713  262001 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:47:12.341721  262001 kubeadm.go:318] 
	I1025 09:47:12.341776  262001 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:47:12.341859  262001 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:47:12.341936  262001 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:47:12.341945  262001 kubeadm.go:318] 
	I1025 09:47:12.342054  262001 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:47:12.342140  262001 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:47:12.342150  262001 kubeadm.go:318] 
	I1025 09:47:12.342238  262001 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7qak07.9wrl07i3bus0m2or \
	I1025 09:47:12.342350  262001 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 \
	I1025 09:47:12.342375  262001 kubeadm.go:318] 	--control-plane 
	I1025 09:47:12.342383  262001 kubeadm.go:318] 
	I1025 09:47:12.342472  262001 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:47:12.342480  262001 kubeadm.go:318] 
	I1025 09:47:12.342566  262001 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7qak07.9wrl07i3bus0m2or \
	I1025 09:47:12.342690  262001 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 
	I1025 09:47:12.342704  262001 cni.go:84] Creating CNI manager for ""
	I1025 09:47:12.342712  262001 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:47:12.345926  262001 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:47:12.349026  262001 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:47:12.354280  262001 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:47:12.354304  262001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:47:12.369223  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:47:12.677027  262001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:47:12.677239  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:12.677293  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-184548 minikube.k8s.io/updated_at=2025_10_25T09_47_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=addons-184548 minikube.k8s.io/primary=true
	I1025 09:47:12.852071  262001 ops.go:34] apiserver oom_adj: -16
	I1025 09:47:12.852213  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:13.352991  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:13.852621  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:14.353267  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:14.852347  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:15.353210  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:15.852331  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:16.352346  262001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:47:16.460189  262001 kubeadm.go:1113] duration metric: took 3.783065612s to wait for elevateKubeSystemPrivileges
	I1025 09:47:16.460214  262001 kubeadm.go:402] duration metric: took 21.420667233s to StartCluster
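	The burst of identical `kubectl get sa default` calls above (09:47:12.852 through 09:47:16.352, roughly every 500ms) is a readiness poll: the `default` ServiceAccount only appears once the controller manager's service-account controller is running, and its existence is the signal the elevateKubeSystemPrivileges step waits for, per the duration metric above. A standalone shell equivalent of that loop (a sketch, not minikube's actual Go code):
	
		# poll until the default ServiceAccount exists, i.e. the SA controller is up
		until sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
		  sleep 0.5
		done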
	I1025 09:47:16.460232  262001 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:47:16.460339  262001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 09:47:16.460740  262001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:47:16.460921  262001 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:47:16.461110  262001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:47:16.461380  262001 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:47:16.461439  262001 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 09:47:16.461541  262001 addons.go:69] Setting yakd=true in profile "addons-184548"
	I1025 09:47:16.461560  262001 addons.go:238] Setting addon yakd=true in "addons-184548"
	I1025 09:47:16.461583  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.462105  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.462655  262001 addons.go:69] Setting metrics-server=true in profile "addons-184548"
	I1025 09:47:16.462789  262001 addons.go:238] Setting addon metrics-server=true in "addons-184548"
	I1025 09:47:16.462822  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.462830  262001 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-184548"
	I1025 09:47:16.462848  262001 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-184548"
	I1025 09:47:16.462873  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.463250  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.463324  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.466101  262001 addons.go:69] Setting registry=true in profile "addons-184548"
	I1025 09:47:16.466133  262001 addons.go:238] Setting addon registry=true in "addons-184548"
	I1025 09:47:16.466181  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.466705  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.467373  262001 addons.go:69] Setting registry-creds=true in profile "addons-184548"
	I1025 09:47:16.493032  262001 addons.go:238] Setting addon registry-creds=true in "addons-184548"
	I1025 09:47:16.493081  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.493560  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.467388  262001 addons.go:69] Setting storage-provisioner=true in profile "addons-184548"
	I1025 09:47:16.510175  262001 addons.go:238] Setting addon storage-provisioner=true in "addons-184548"
	I1025 09:47:16.510216  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.510685  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.467395  262001 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-184548"
	I1025 09:47:16.511979  262001 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-184548"
	I1025 09:47:16.512291  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.467401  262001 addons.go:69] Setting volcano=true in profile "addons-184548"
	I1025 09:47:16.529400  262001 addons.go:238] Setting addon volcano=true in "addons-184548"
	I1025 09:47:16.529447  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.530004  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.467427  262001 addons.go:69] Setting volumesnapshots=true in profile "addons-184548"
	I1025 09:47:16.539791  262001 addons.go:238] Setting addon volumesnapshots=true in "addons-184548"
	I1025 09:47:16.491572  262001 addons.go:69] Setting cloud-spanner=true in profile "addons-184548"
	I1025 09:47:16.539828  262001 addons.go:238] Setting addon cloud-spanner=true in "addons-184548"
	I1025 09:47:16.539856  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.491613  262001 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-184548"
	I1025 09:47:16.540010  262001 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-184548"
	I1025 09:47:16.540029  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.491625  262001 addons.go:69] Setting default-storageclass=true in profile "addons-184548"
	I1025 09:47:16.540119  262001 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-184548"
	I1025 09:47:16.491632  262001 addons.go:69] Setting gcp-auth=true in profile "addons-184548"
	I1025 09:47:16.540197  262001 mustload.go:65] Loading cluster: addons-184548
	I1025 09:47:16.491638  262001 addons.go:69] Setting ingress=true in profile "addons-184548"
	I1025 09:47:16.540287  262001 addons.go:238] Setting addon ingress=true in "addons-184548"
	I1025 09:47:16.540311  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.491644  262001 addons.go:69] Setting ingress-dns=true in profile "addons-184548"
	I1025 09:47:16.540391  262001 addons.go:238] Setting addon ingress-dns=true in "addons-184548"
	I1025 09:47:16.540408  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.491666  262001 addons.go:69] Setting inspektor-gadget=true in profile "addons-184548"
	I1025 09:47:16.540495  262001 addons.go:238] Setting addon inspektor-gadget=true in "addons-184548"
	I1025 09:47:16.540508  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.491811  262001 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-184548"
	I1025 09:47:16.540581  262001 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-184548"
	I1025 09:47:16.540594  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.492462  262001 out.go:179] * Verifying Kubernetes components...
	I1025 09:47:16.551541  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.552929  262001 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:47:16.553242  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.553842  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.566045  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.566498  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.575435  262001 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1025 09:47:16.576011  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.578392  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.593861  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.598767  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.614243  262001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:47:16.617885  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.617909  262001 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:47:16.617925  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 09:47:16.618014  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.632821  262001 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 09:47:16.647650  262001 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 09:47:16.649900  262001 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 09:47:16.652737  262001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
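	The sed pipeline above rewrites the CoreDNS Corefile in-flight before replacing the ConfigMap: it inserts a `log` directive ahead of `errors` and a `hosts` block ahead of the `forward . /etc/resolv.conf` line, so in-cluster lookups of host.minikube.internal resolve to the Docker gateway. The patched fragment should look roughly like this (reconstructed from the sed expressions, with the other default plugins omitted):
	
		.:53 {
		    log
		    errors
		    hosts {
		       192.168.49.1 host.minikube.internal
		       fallthrough
		    }
		    forward . /etc/resolv.conf
		}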
	I1025 09:47:16.652992  262001 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 09:47:16.653269  262001 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 09:47:16.653419  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.664729  262001 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 09:47:16.664761  262001 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 09:47:16.664861  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.697731  262001 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 09:47:16.698246  262001 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 09:47:16.737645  262001 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-184548"
	I1025 09:47:16.737803  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.748102  262001 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:47:16.748167  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 09:47:16.748289  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.773349  262001 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 09:47:16.773423  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 09:47:16.773504  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.797163  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W1025 09:47:16.798326  262001 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 09:47:16.800430  262001 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 09:47:16.800492  262001 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 09:47:16.800583  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.806057  262001 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:47:16.809218  262001 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:47:16.809282  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:47:16.809379  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.839620  262001 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 09:47:16.841829  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.842781  262001 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:47:16.842829  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 09:47:16.842902  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.866007  262001 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 09:47:16.868492  262001 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 09:47:16.868552  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 09:47:16.868638  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.885765  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.886082  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 09:47:16.893664  262001 addons.go:238] Setting addon default-storageclass=true in "addons-184548"
	I1025 09:47:16.893710  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:16.897861  262001 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:47:16.898244  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:16.898654  262001 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 09:47:16.901964  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:16.903269  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 09:47:16.903359  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:16.906141  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 09:47:16.906196  262001 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 09:47:16.906206  262001 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 09:47:16.906268  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.925663  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 09:47:16.928615  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 09:47:16.935925  262001 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 09:47:16.938881  262001 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:47:16.939980  262001 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 09:47:16.942448  262001 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:47:16.942472  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 09:47:16.942538  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.950095  262001 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:47:16.950126  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 09:47:16.950194  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:16.958323  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:16.982453  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 09:47:16.990243  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 09:47:16.997725  262001 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 09:47:17.002740  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 09:47:17.002830  262001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 09:47:17.002955  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:17.012107  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.022531  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.048265  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.055842  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.057375  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.059630  262001 out.go:179]   - Using image docker.io/busybox:stable
	I1025 09:47:17.078142  262001 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 09:47:17.088065  262001 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:47:17.088111  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 09:47:17.088186  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:17.112754  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.126071  262001 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:47:17.126100  262001 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:47:17.126173  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:17.126538  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.135428  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.151347  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.154127  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.169566  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	W1025 09:47:17.171823  262001 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:47:17.171854  262001 retry.go:31] will retry after 132.984624ms: ssh: handshake failed: EOF
	W1025 09:47:17.172373  262001 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:47:17.172397  262001 retry.go:31] will retry after 304.151219ms: ssh: handshake failed: EOF
	W1025 09:47:17.172801  262001 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:47:17.172815  262001 retry.go:31] will retry after 244.858636ms: ssh: handshake failed: EOF
	W1025 09:47:17.173359  262001 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:47:17.173380  262001 retry.go:31] will retry after 323.234829ms: ssh: handshake failed: EOF
	I1025 09:47:17.182642  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:17.183318  262001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1025 09:47:17.477772  262001 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:47:17.477843  262001 retry.go:31] will retry after 541.807175ms: ssh: handshake failed: EOF
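
The handshake failures above are absorbed by a generic retry loop: log the error, sleep a short randomized delay, and dial again. The varying delays (132ms, 304ms, 244ms, ...) suggest jittered backoff. A minimal sketch of that pattern, assuming a hypothetical retryWithBackoff helper rather than minikube's real retry.go:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn up to attempts times, sleeping a jittered,
// roughly doubling delay between failures, and returns the last error.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	dials := 0
	_ = retryWithBackoff(func() error {
		dials++
		if dials < 3 {
			return fmt.Errorf("ssh: handshake failed: EOF")
		}
		return nil
	}, 5, 100*time.Millisecond)
}
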
	I1025 09:47:17.717968  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:47:17.743604  262001 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 09:47:17.743677  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 09:47:17.793746  262001 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 09:47:17.793820  262001 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 09:47:17.882972  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:47:17.945548  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:47:17.974595  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 09:47:17.985213  262001 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 09:47:17.985289  262001 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 09:47:18.013356  262001 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 09:47:18.013443  262001 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 09:47:18.027584  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:47:18.044156  262001 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 09:47:18.044231  262001 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 09:47:18.058274  262001 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 09:47:18.058357  262001 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 09:47:18.140299  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:47:18.177245  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:47:18.185312  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:47:18.203307  262001 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:18.203375  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 09:47:18.206618  262001 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:47:18.206636  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 09:47:18.208836  262001 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:47:18.208854  262001 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 09:47:18.208916  262001 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 09:47:18.208921  262001 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 09:47:18.211998  262001 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 09:47:18.212020  262001 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 09:47:18.375378  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:47:18.395653  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:47:18.410483  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:47:18.435959  262001 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:47:18.436030  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 09:47:18.439536  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:18.441865  262001 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 09:47:18.441943  262001 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 09:47:18.516724  262001 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.333356975s)
	I1025 09:47:18.516829  262001 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.863624201s)
	I1025 09:47:18.517042  262001 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
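
The bash pipeline that just completed rewrites the coredns ConfigMap so host.minikube.internal resolves to the gateway IP: it inserts a hosts plugin block before the forward directive and feeds the result to kubectl replace. A sketch of the same Corefile edit as a pure string transformation in Go (illustrative only; minikube does it with sed):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts block, mapping
// host.minikube.internal to the gateway IP, immediately before the
// forward directive, which is the edit the sed pipeline above performs.
func injectHostRecord(corefile, gatewayIP string) string {
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString("        hosts {\n")
			out.WriteString("           " + gatewayIP + " host.minikube.internal\n")
			out.WriteString("           fallthrough\n")
			out.WriteString("        }\n")
		}
		out.WriteString(line)
		out.WriteString("\n")
	}
	return strings.TrimSuffix(out.String(), "\n")
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.49.1"))
}
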
	I1025 09:47:18.518170  262001 node_ready.go:35] waiting up to 6m0s for node "addons-184548" to be "Ready" ...
	I1025 09:47:18.657251  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:47:18.681672  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 09:47:18.681738  262001 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 09:47:18.884306  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 09:47:18.884371  262001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 09:47:18.929908  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 09:47:18.929976  262001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 09:47:19.024314  262001 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-184548" context rescaled to 1 replicas
	I1025 09:47:19.060543  262001 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:47:19.060569  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 09:47:19.208768  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 09:47:19.208814  262001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 09:47:19.335754  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.617629589s)
	I1025 09:47:19.347339  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:47:19.463280  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 09:47:19.463308  262001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 09:47:19.692617  262001 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 09:47:19.692643  262001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 09:47:19.935307  262001 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 09:47:19.935381  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 09:47:20.126371  262001 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 09:47:20.126399  262001 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 09:47:20.379339  262001 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 09:47:20.379368  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	W1025 09:47:20.551446  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
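
node_ready.go polls the node object and retries while the Ready condition reports False, as above. A hedged client-go sketch of the underlying check; the node name and 6-minute budget mirror the log, the helper itself is an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True, the same
// check node_ready.go keeps retrying while the status is False.
func nodeReady(client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for deadline := time.Now().Add(6 * time.Minute); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
		if ok, err := nodeReady(client, "addons-184548"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
	}
	fmt.Println("timed out waiting for node to be Ready")
}
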
	I1025 09:47:20.578243  262001 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 09:47:20.578267  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 09:47:20.762115  262001 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:47:20.762198  262001 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 09:47:20.960208  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:47:21.943353  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.060301483s)
	I1025 09:47:21.943460  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.99783848s)
	I1025 09:47:21.943513  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.968833935s)
	I1025 09:47:21.943552  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.915895862s)
	I1025 09:47:21.943599  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.803234157s)
	I1025 09:47:21.943846  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.766522472s)
	I1025 09:47:22.911668  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.726273072s)
	I1025 09:47:22.911748  262001 addons.go:479] Verifying addon ingress=true in "addons-184548"
	I1025 09:47:22.912267  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.536806479s)
	I1025 09:47:22.912300  262001 addons.go:479] Verifying addon registry=true in "addons-184548"
	I1025 09:47:22.912327  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.472677073s)
	W1025 09:47:22.912368  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:22.912416  262001 retry.go:31] will retry after 185.956785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
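
The repeated validation failure is consistent with the earlier scp line, which shows ig-crd.yaml arriving at only 14 bytes: a file that small cannot carry the required top-level apiVersion and kind fields, so kubectl rejects it on every retry. A hedged sketch of a cheap pre-flight check that would surface this before apply (illustrative, not part of minikube):

package main

import (
	"fmt"
	"os"
	"strings"
)

// validateManifest does a cheap textual check that each YAML document in
// path declares apiVersion and kind before it is handed to kubectl apply.
func validateManifest(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for i, doc := range strings.Split(string(data), "\n---") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		if !strings.Contains(doc, "apiVersion:") || !strings.Contains(doc, "kind:") {
			return fmt.Errorf("%s: document %d: apiVersion or kind not set", path, i)
		}
	}
	return nil
}

func main() {
	if err := validateManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
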
	I1025 09:47:22.912634  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.516905503s)
	I1025 09:47:22.912655  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.56528628s)
	W1025 09:47:22.912682  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 09:47:22.912695  262001 retry.go:31] will retry after 295.663661ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
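
This second failure, "no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first", is a CRD-establishment race: the CRDs and a VolumeSnapshotClass that uses them are applied in one invocation, and the retry succeeds once the API server has registered the new types. A hedged sketch of making that wait explicit via the CRD's Established condition (the apiextensions client-go packages are standard; the helper is illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForCRDEstablished polls until the named CRD reports the Established
// condition, so resources using it can be applied without the
// "no matches for kind" race seen above.
func waitForCRDEstablished(client apiextclient.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range crd.Status.Conditions {
				if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for CRD %s to be established", name)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := apiextclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForCRDEstablished(client, "volumesnapshotclasses.snapshot.storage.k8s.io", 2*time.Minute); err != nil {
		panic(err)
	}
}
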
	I1025 09:47:22.912718  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.502156172s)
	I1025 09:47:22.912730  262001 addons.go:479] Verifying addon metrics-server=true in "addons-184548"
	I1025 09:47:22.912636  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.255292132s)
	I1025 09:47:22.915270  262001 out.go:179] * Verifying ingress addon...
	I1025 09:47:22.917230  262001 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-184548 service yakd-dashboard -n yakd-dashboard
	
	I1025 09:47:22.917356  262001 out.go:179] * Verifying registry addon...
	I1025 09:47:22.921583  262001 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 09:47:22.921628  262001 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 09:47:22.930269  262001 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:47:22.930295  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:22.933482  262001 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 09:47:22.933507  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
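
The kapi.go wait loops above list pods by label selector and re-check until they leave Pending. A hedged client-go sketch of the same poll; the namespace and selector mirror the log, the polling helper itself is an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls pods matching selector in ns until every match is
// Running, logging the current state while it waits, much like kapi.go.
func waitForPods(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if ready {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for pods with label %q", selector)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(client, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
}
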
	W1025 09:47:23.023115  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:23.099041  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:23.208557  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:47:23.236433  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.27610319s)
	I1025 09:47:23.236471  262001 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-184548"
	I1025 09:47:23.239658  262001 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 09:47:23.243278  262001 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 09:47:23.253374  262001 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:47:23.253399  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:23.426668  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:23.426836  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:23.749193  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:23.927548  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:23.927894  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:24.143418  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.04433479s)
	W1025 09:47:24.143469  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:24.143502  262001 retry.go:31] will retry after 557.55518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:47:24.247123  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:24.426922  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:24.427262  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:24.590642  262001 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 09:47:24.590722  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:24.609896  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:24.702135  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:24.740787  262001 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 09:47:24.747781  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:24.760791  262001 addons.go:238] Setting addon gcp-auth=true in "addons-184548"
	I1025 09:47:24.760881  262001 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:47:24.761386  262001 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:47:24.781662  262001 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 09:47:24.781733  262001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:47:24.800648  262001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:47:24.926679  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:24.927094  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:25.247190  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:25.427023  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:25.427895  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:25.522331  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:25.748945  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:25.926656  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:25.926956  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:26.071703  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.863042275s)
	I1025 09:47:26.071831  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.369661277s)
	W1025 09:47:26.071916  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:26.071942  262001 retry.go:31] will retry after 768.223432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:47:26.071877  262001 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.290191446s)
	I1025 09:47:26.075150  262001 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 09:47:26.078123  262001 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:47:26.081070  262001 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 09:47:26.081110  262001 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 09:47:26.095644  262001 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 09:47:26.095667  262001 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 09:47:26.109929  262001 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:47:26.109956  262001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 09:47:26.124772  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:47:26.247602  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:26.426644  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:26.427014  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:26.592437  262001 addons.go:479] Verifying addon gcp-auth=true in "addons-184548"
	I1025 09:47:26.596639  262001 out.go:179] * Verifying gcp-auth addon...
	I1025 09:47:26.609423  262001 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 09:47:26.617406  262001 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 09:47:26.617433  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:26.751458  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:26.840903  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:26.926903  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:26.927065  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:27.113326  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:27.246530  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:27.426613  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:27.426950  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:27.613335  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:47:27.657352  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:27.657385  262001 retry.go:31] will retry after 1.097014533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:47:27.747904  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:27.925561  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:27.925889  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:28.022167  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:28.113162  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:28.247163  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:28.425557  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:28.425814  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:28.612883  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:28.746787  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:28.754859  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:28.926680  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:28.927683  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:29.113445  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:29.247666  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:29.426013  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:29.426783  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:29.571898  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:47:29.571939  262001 retry.go:31] will retry after 1.599607704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:47:29.612735  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:29.746870  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:29.924926  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:29.925075  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:30.022958  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:30.114230  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:30.247670  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:30.425147  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:30.425391  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:30.613350  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:30.746452  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:30.924475  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:30.924624  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:31.113763  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:31.171794  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:31.247499  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:31.425208  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:31.425435  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:31.613335  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:31.750155  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:31.926723  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:31.926868  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 09:47:32.019166  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:47:32.019195  262001 retry.go:31] will retry after 1.136319033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:47:32.113602  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:32.247283  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:32.425481  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:32.426844  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:32.521782  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:32.612330  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:32.746292  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:32.925023  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:32.925215  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:33.113070  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:33.156135  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:33.246444  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:33.426482  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:33.426577  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:33.613116  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:33.749642  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:33.926927  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:33.927432  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:33.973511  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:47:33.973599  262001 retry.go:31] will retry after 3.205495295s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:47:34.113200  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:34.247203  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:34.425961  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:34.426203  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:34.613464  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:34.746654  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:34.925260  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:34.925267  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:35.022404  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:35.113009  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:35.246811  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:35.425359  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:35.425771  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:35.613546  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:35.746814  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:35.924998  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:35.925034  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:36.113710  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:36.246958  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:36.424898  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:36.425070  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:36.613195  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:36.745957  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:36.925748  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:36.925930  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:37.112905  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:37.180025  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:37.249843  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:37.425770  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:37.425972  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:37.530999  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:37.616991  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:37.750363  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:37.925209  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:37.927094  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:38.061449  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:47:38.061537  262001 retry.go:31] will retry after 4.791342408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:47:38.113518  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:38.246874  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:38.425172  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:38.425308  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:38.612475  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:38.746297  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:38.925763  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:38.925931  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:39.112981  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:39.247021  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:39.425571  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:39.425633  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:39.613389  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:39.746561  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:39.924880  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:39.925092  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:40.023097  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:40.113055  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:40.246924  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:40.426151  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:40.426323  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:40.612848  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:40.746618  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:40.925662  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:40.925866  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:41.113702  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:41.247023  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:41.425926  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:41.426403  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:41.612936  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:41.747369  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:41.925264  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:41.925664  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:42.113800  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:42.247448  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:42.425817  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:42.426039  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:42.521916  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:42.613045  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:42.746941  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:42.853504  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:42.924651  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:42.926193  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:43.113359  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:43.246964  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:43.427041  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:43.427438  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:43.613243  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:47:43.665029  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:47:43.665063  262001 retry.go:31] will retry after 6.961055561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:47:43.747511  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:43.925729  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:43.925972  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:44.112939  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:44.246932  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:44.425090  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:44.425656  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:44.613032  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:44.746963  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:44.925007  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:44.925238  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 09:47:45.029607  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:45.116853  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:45.248044  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:45.425792  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:45.426053  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:45.612944  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:45.747155  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:45.925277  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:45.925449  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:46.112840  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:46.246587  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:46.424448  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:46.424922  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:46.612798  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:46.747022  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:46.925152  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:46.925417  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:47.113771  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:47.246941  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:47.425112  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:47.425288  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:47.521060  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:47.612906  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:47.746931  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:47.924957  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:47.925354  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:48.113497  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:48.246556  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:48.425876  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:48.425937  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:48.613364  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:48.746219  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:48.925693  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:48.926108  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:49.112645  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:49.252332  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:49.425693  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:49.426067  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:49.522031  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:49.612650  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:49.746644  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:49.924904  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:49.925160  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:50.112788  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:50.246724  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:50.425269  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:50.425535  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:50.613031  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:50.627205  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:47:50.746181  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:50.927573  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:50.928037  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:51.114723  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:51.250519  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:51.425241  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:51.425822  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:51.447333  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:47:51.447368  262001 retry.go:31] will retry after 13.336991842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
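The waits between the six failed attempts (roughly 1.6s, 1.1s, 3.2s, 4.8s, 7.0s, 13.3s) trace the shape of jittered exponential backoff. A minimal sketch of that retry pattern, illustrative only and not minikube's actual retry.go:

package sketch

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping an exponentially growing, jittered interval between tries,
// which is the pattern behind the "will retry after ..." lines above.
// base must be positive (rand.Int63n panics on a non-positive bound).
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base*time.Duration(1<<uint(i)) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

Backoff keeps a persistently failing apply from hammering the apiserver, but as the identical errors show, no number of retries can fix an invalid manifest.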
	I1025 09:47:51.613697  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:51.746741  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:51.925374  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:51.925544  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:52.021678  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:52.113818  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:52.246878  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:52.425415  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:52.426382  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:52.613101  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:52.747315  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:52.924737  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:52.925012  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:53.114138  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:53.247302  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:53.425884  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:53.425927  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:53.612937  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:53.746827  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:53.925363  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:53.925443  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 09:47:54.021833  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:54.113063  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:54.247171  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:54.425471  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:54.425680  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:54.613178  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:54.746954  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:54.925366  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:54.926091  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:55.113330  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:55.246248  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:55.425319  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:55.426559  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:55.613435  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:55.746082  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:55.925328  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:55.925442  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:56.112852  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:56.246809  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:56.424968  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:56.425572  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:47:56.521509  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:56.613505  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:56.746729  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:56.924921  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:56.925004  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:57.112745  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:57.246920  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:57.425069  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:57.425165  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:57.612934  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:57.746734  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:57.925036  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:57.926023  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:58.113317  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:58.247172  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:58.425574  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:58.425685  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 09:47:58.521803  262001 node_ready.go:57] node "addons-184548" has "Ready":"False" status (will retry)
	I1025 09:47:58.612783  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:58.794733  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:58.935859  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:58.936002  262001 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:47:58.936019  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:59.027423  262001 node_ready.go:49] node "addons-184548" is "Ready"
	I1025 09:47:59.027454  262001 node_ready.go:38] duration metric: took 40.509259018s for node "addons-184548" to be "Ready" ...
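The node needed about 40.5 seconds to report Ready, and that was the gate behind most of the waiting above: pods stay Pending until the CNI is up and the node becomes schedulable. The readiness test itself reduces to reading one node condition; a sketch under the same client-go assumptions as earlier:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the named node has condition Ready=True,
// the check node_ready.go polls in the log above.
func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}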
	I1025 09:47:59.027468  262001 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:47:59.027527  262001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:47:59.067839  262001 api_server.go:72] duration metric: took 42.60689062s to wait for apiserver process to appear ...
	I1025 09:47:59.067866  262001 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:47:59.067887  262001 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 09:47:59.077397  262001 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 09:47:59.108157  262001 api_server.go:141] control plane version: v1.34.1
	I1025 09:47:59.108191  262001 api_server.go:131] duration metric: took 40.317466ms to wait for apiserver health ...
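Once the kube-apiserver process is confirmed, the health check is a plain HTTPS GET against /healthz expecting a 200 with body "ok", exactly what the log records. A bare-bones version of that probe; TLS verification is skipped here purely to keep the sketch short, where a real client would trust the cluster CA:

package sketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues the GET /healthz shown at api_server.go:253 above
// and treats anything but HTTP 200 as unhealthy.
func probeHealthz(base string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: skipping verification; use the cluster CA in practice.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

For example, probeHealthz("https://192.168.49.2:8443") corresponds to the check logged above.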
	I1025 09:47:59.108202  262001 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:47:59.214814  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:59.215448  262001 system_pods.go:59] 19 kube-system pods found
	I1025 09:47:59.215499  262001 system_pods.go:61] "coredns-66bc5c9577-hq8d8" [5f9e2449-9a59-40bb-9e50-c090419fd504] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:47:59.215507  262001 system_pods.go:61] "csi-hostpath-attacher-0" [044ac472-ff62-437c-ba08-8aa8f5d30315] Pending
	I1025 09:47:59.215520  262001 system_pods.go:61] "csi-hostpath-resizer-0" [60ca0fd7-d65f-4c71-941e-82f8ec090187] Pending
	I1025 09:47:59.215529  262001 system_pods.go:61] "csi-hostpathplugin-4jzcx" [bf0f0faf-4e29-4ab0-ba77-2e407b834b53] Pending
	I1025 09:47:59.215533  262001 system_pods.go:61] "etcd-addons-184548" [32a1291f-4f04-44e5-847e-ecfe895fb1c3] Running
	I1025 09:47:59.215548  262001 system_pods.go:61] "kindnet-dn6n8" [3c3ad1a4-a426-4593-b520-3ddacbbcedbb] Running
	I1025 09:47:59.215553  262001 system_pods.go:61] "kube-apiserver-addons-184548" [bf7b7404-bdc0-4bde-bc5e-78b6c046427b] Running
	I1025 09:47:59.215557  262001 system_pods.go:61] "kube-controller-manager-addons-184548" [b201b806-15b2-4190-8310-a15384162b02] Running
	I1025 09:47:59.215569  262001 system_pods.go:61] "kube-ingress-dns-minikube" [065b8ebc-6b4a-4aef-81e0-455096dd9765] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:47:59.215575  262001 system_pods.go:61] "kube-proxy-clv7b" [98d8d78b-4b77-49e2-b0bb-0ffc37ef6b7d] Running
	I1025 09:47:59.215587  262001 system_pods.go:61] "kube-scheduler-addons-184548" [384e707f-4d85-4826-965d-60352bad843a] Running
	I1025 09:47:59.215596  262001 system_pods.go:61] "metrics-server-85b7d694d7-5mbb4" [473fd1dc-bcd8-4299-ae17-9e3bca061756] Pending
	I1025 09:47:59.215603  262001 system_pods.go:61] "nvidia-device-plugin-daemonset-7sktv" [d6a26aea-18d0-46d2-a809-bf7ec95759f6] Pending
	I1025 09:47:59.215617  262001 system_pods.go:61] "registry-6b586f9694-cft48" [3b3f9d6f-cbd8-4b92-987f-b61c282e6860] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:47:59.215628  262001 system_pods.go:61] "registry-creds-764b6fb674-dk8fg" [5d6b936c-c964-41cd-a147-a05337379ebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:47:59.215646  262001 system_pods.go:61] "registry-proxy-l4vs6" [2d113e11-a239-4418-8da7-40a53e33fd75] Pending
	I1025 09:47:59.215651  262001 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2bqhf" [d754d289-7438-4581-aec3-5f6119282c1f] Pending
	I1025 09:47:59.215666  262001 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rlnlm" [a9f57908-6cfb-4029-b95f-56dcc3188ca2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:47:59.215679  262001 system_pods.go:61] "storage-provisioner" [6cfd6874-b84e-4d5a-8a54-04f7e12dbfcb] Pending
	I1025 09:47:59.215689  262001 system_pods.go:74] duration metric: took 107.477871ms to wait for pod list to return data ...
	I1025 09:47:59.215697  262001 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:47:59.233450  262001 default_sa.go:45] found service account: "default"
	I1025 09:47:59.233480  262001 default_sa.go:55] duration metric: took 17.776204ms for default service account to be created ...
	I1025 09:47:59.233498  262001 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:47:59.251651  262001 system_pods.go:86] 19 kube-system pods found
	I1025 09:47:59.251689  262001 system_pods.go:89] "coredns-66bc5c9577-hq8d8" [5f9e2449-9a59-40bb-9e50-c090419fd504] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:47:59.251699  262001 system_pods.go:89] "csi-hostpath-attacher-0" [044ac472-ff62-437c-ba08-8aa8f5d30315] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:47:59.251705  262001 system_pods.go:89] "csi-hostpath-resizer-0" [60ca0fd7-d65f-4c71-941e-82f8ec090187] Pending
	I1025 09:47:59.251711  262001 system_pods.go:89] "csi-hostpathplugin-4jzcx" [bf0f0faf-4e29-4ab0-ba77-2e407b834b53] Pending
	I1025 09:47:59.251715  262001 system_pods.go:89] "etcd-addons-184548" [32a1291f-4f04-44e5-847e-ecfe895fb1c3] Running
	I1025 09:47:59.251719  262001 system_pods.go:89] "kindnet-dn6n8" [3c3ad1a4-a426-4593-b520-3ddacbbcedbb] Running
	I1025 09:47:59.251723  262001 system_pods.go:89] "kube-apiserver-addons-184548" [bf7b7404-bdc0-4bde-bc5e-78b6c046427b] Running
	I1025 09:47:59.251729  262001 system_pods.go:89] "kube-controller-manager-addons-184548" [b201b806-15b2-4190-8310-a15384162b02] Running
	I1025 09:47:59.251740  262001 system_pods.go:89] "kube-ingress-dns-minikube" [065b8ebc-6b4a-4aef-81e0-455096dd9765] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:47:59.251744  262001 system_pods.go:89] "kube-proxy-clv7b" [98d8d78b-4b77-49e2-b0bb-0ffc37ef6b7d] Running
	I1025 09:47:59.251753  262001 system_pods.go:89] "kube-scheduler-addons-184548" [384e707f-4d85-4826-965d-60352bad843a] Running
	I1025 09:47:59.251757  262001 system_pods.go:89] "metrics-server-85b7d694d7-5mbb4" [473fd1dc-bcd8-4299-ae17-9e3bca061756] Pending
	I1025 09:47:59.251761  262001 system_pods.go:89] "nvidia-device-plugin-daemonset-7sktv" [d6a26aea-18d0-46d2-a809-bf7ec95759f6] Pending
	I1025 09:47:59.251775  262001 system_pods.go:89] "registry-6b586f9694-cft48" [3b3f9d6f-cbd8-4b92-987f-b61c282e6860] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:47:59.251781  262001 system_pods.go:89] "registry-creds-764b6fb674-dk8fg" [5d6b936c-c964-41cd-a147-a05337379ebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:47:59.251785  262001 system_pods.go:89] "registry-proxy-l4vs6" [2d113e11-a239-4418-8da7-40a53e33fd75] Pending
	I1025 09:47:59.251791  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2bqhf" [d754d289-7438-4581-aec3-5f6119282c1f] Pending
	I1025 09:47:59.251796  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rlnlm" [a9f57908-6cfb-4029-b95f-56dcc3188ca2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:47:59.251800  262001 system_pods.go:89] "storage-provisioner" [6cfd6874-b84e-4d5a-8a54-04f7e12dbfcb] Pending
	I1025 09:47:59.251814  262001 retry.go:31] will retry after 267.072515ms: missing components: kube-dns
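The "missing components: kube-dns" retry is the last gate: the cluster is not treated as serving k8s-apps until coredns (which carries the k8s-app=kube-dns label) is Running, and it is still Pending in the listing above. A sketch of that test under the same client-go assumptions:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// missingKubeDNS reports whether no coredns pod is Running yet, the
// condition behind the "missing components: kube-dns" retries above.
func missingKubeDNS(ctx context.Context, c kubernetes.Interface) (bool, error) {
	pods, err := c.CoreV1().Pods("kube-system").List(ctx,
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		return true, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			return false, nil
		}
	}
	return true, nil
}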
	I1025 09:47:59.261286  262001 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:47:59.261319  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:59.438120  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:59.448434  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:47:59.567641  262001 system_pods.go:86] 19 kube-system pods found
	I1025 09:47:59.567684  262001 system_pods.go:89] "coredns-66bc5c9577-hq8d8" [5f9e2449-9a59-40bb-9e50-c090419fd504] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:47:59.567699  262001 system_pods.go:89] "csi-hostpath-attacher-0" [044ac472-ff62-437c-ba08-8aa8f5d30315] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:47:59.567713  262001 system_pods.go:89] "csi-hostpath-resizer-0" [60ca0fd7-d65f-4c71-941e-82f8ec090187] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:47:59.567719  262001 system_pods.go:89] "csi-hostpathplugin-4jzcx" [bf0f0faf-4e29-4ab0-ba77-2e407b834b53] Pending
	I1025 09:47:59.567730  262001 system_pods.go:89] "etcd-addons-184548" [32a1291f-4f04-44e5-847e-ecfe895fb1c3] Running
	I1025 09:47:59.567736  262001 system_pods.go:89] "kindnet-dn6n8" [3c3ad1a4-a426-4593-b520-3ddacbbcedbb] Running
	I1025 09:47:59.567748  262001 system_pods.go:89] "kube-apiserver-addons-184548" [bf7b7404-bdc0-4bde-bc5e-78b6c046427b] Running
	I1025 09:47:59.567757  262001 system_pods.go:89] "kube-controller-manager-addons-184548" [b201b806-15b2-4190-8310-a15384162b02] Running
	I1025 09:47:59.567771  262001 system_pods.go:89] "kube-ingress-dns-minikube" [065b8ebc-6b4a-4aef-81e0-455096dd9765] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:47:59.567780  262001 system_pods.go:89] "kube-proxy-clv7b" [98d8d78b-4b77-49e2-b0bb-0ffc37ef6b7d] Running
	I1025 09:47:59.567789  262001 system_pods.go:89] "kube-scheduler-addons-184548" [384e707f-4d85-4826-965d-60352bad843a] Running
	I1025 09:47:59.567800  262001 system_pods.go:89] "metrics-server-85b7d694d7-5mbb4" [473fd1dc-bcd8-4299-ae17-9e3bca061756] Pending
	I1025 09:47:59.567823  262001 system_pods.go:89] "nvidia-device-plugin-daemonset-7sktv" [d6a26aea-18d0-46d2-a809-bf7ec95759f6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:47:59.567838  262001 system_pods.go:89] "registry-6b586f9694-cft48" [3b3f9d6f-cbd8-4b92-987f-b61c282e6860] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:47:59.567845  262001 system_pods.go:89] "registry-creds-764b6fb674-dk8fg" [5d6b936c-c964-41cd-a147-a05337379ebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:47:59.567857  262001 system_pods.go:89] "registry-proxy-l4vs6" [2d113e11-a239-4418-8da7-40a53e33fd75] Pending
	I1025 09:47:59.567864  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2bqhf" [d754d289-7438-4581-aec3-5f6119282c1f] Pending
	I1025 09:47:59.567872  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rlnlm" [a9f57908-6cfb-4029-b95f-56dcc3188ca2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:47:59.567882  262001 system_pods.go:89] "storage-provisioner" [6cfd6874-b84e-4d5a-8a54-04f7e12dbfcb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:47:59.567901  262001 retry.go:31] will retry after 243.206318ms: missing components: kube-dns
	I1025 09:47:59.659533  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:47:59.758096  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:47:59.827085  262001 system_pods.go:86] 19 kube-system pods found
	I1025 09:47:59.827126  262001 system_pods.go:89] "coredns-66bc5c9577-hq8d8" [5f9e2449-9a59-40bb-9e50-c090419fd504] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:47:59.827145  262001 system_pods.go:89] "csi-hostpath-attacher-0" [044ac472-ff62-437c-ba08-8aa8f5d30315] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:47:59.827159  262001 system_pods.go:89] "csi-hostpath-resizer-0" [60ca0fd7-d65f-4c71-941e-82f8ec090187] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:47:59.827169  262001 system_pods.go:89] "csi-hostpathplugin-4jzcx" [bf0f0faf-4e29-4ab0-ba77-2e407b834b53] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:47:59.827188  262001 system_pods.go:89] "etcd-addons-184548" [32a1291f-4f04-44e5-847e-ecfe895fb1c3] Running
	I1025 09:47:59.827199  262001 system_pods.go:89] "kindnet-dn6n8" [3c3ad1a4-a426-4593-b520-3ddacbbcedbb] Running
	I1025 09:47:59.827204  262001 system_pods.go:89] "kube-apiserver-addons-184548" [bf7b7404-bdc0-4bde-bc5e-78b6c046427b] Running
	I1025 09:47:59.827209  262001 system_pods.go:89] "kube-controller-manager-addons-184548" [b201b806-15b2-4190-8310-a15384162b02] Running
	I1025 09:47:59.827232  262001 system_pods.go:89] "kube-ingress-dns-minikube" [065b8ebc-6b4a-4aef-81e0-455096dd9765] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:47:59.827237  262001 system_pods.go:89] "kube-proxy-clv7b" [98d8d78b-4b77-49e2-b0bb-0ffc37ef6b7d] Running
	I1025 09:47:59.827245  262001 system_pods.go:89] "kube-scheduler-addons-184548" [384e707f-4d85-4826-965d-60352bad843a] Running
	I1025 09:47:59.827258  262001 system_pods.go:89] "metrics-server-85b7d694d7-5mbb4" [473fd1dc-bcd8-4299-ae17-9e3bca061756] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:47:59.827268  262001 system_pods.go:89] "nvidia-device-plugin-daemonset-7sktv" [d6a26aea-18d0-46d2-a809-bf7ec95759f6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:47:59.827279  262001 system_pods.go:89] "registry-6b586f9694-cft48" [3b3f9d6f-cbd8-4b92-987f-b61c282e6860] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:47:59.827288  262001 system_pods.go:89] "registry-creds-764b6fb674-dk8fg" [5d6b936c-c964-41cd-a147-a05337379ebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:47:59.827314  262001 system_pods.go:89] "registry-proxy-l4vs6" [2d113e11-a239-4418-8da7-40a53e33fd75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:47:59.827321  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2bqhf" [d754d289-7438-4581-aec3-5f6119282c1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:47:59.827335  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rlnlm" [a9f57908-6cfb-4029-b95f-56dcc3188ca2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:47:59.827341  262001 system_pods.go:89] "storage-provisioner" [6cfd6874-b84e-4d5a-8a54-04f7e12dbfcb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:47:59.827363  262001 retry.go:31] will retry after 389.232968ms: missing components: kube-dns
	I1025 09:47:59.931395  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:47:59.931568  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:00.117815  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:00.240063  262001 system_pods.go:86] 19 kube-system pods found
	I1025 09:48:00.241234  262001 system_pods.go:89] "coredns-66bc5c9577-hq8d8" [5f9e2449-9a59-40bb-9e50-c090419fd504] Running
	I1025 09:48:00.241325  262001 system_pods.go:89] "csi-hostpath-attacher-0" [044ac472-ff62-437c-ba08-8aa8f5d30315] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:48:00.241356  262001 system_pods.go:89] "csi-hostpath-resizer-0" [60ca0fd7-d65f-4c71-941e-82f8ec090187] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:48:00.241400  262001 system_pods.go:89] "csi-hostpathplugin-4jzcx" [bf0f0faf-4e29-4ab0-ba77-2e407b834b53] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:48:00.241429  262001 system_pods.go:89] "etcd-addons-184548" [32a1291f-4f04-44e5-847e-ecfe895fb1c3] Running
	I1025 09:48:00.241456  262001 system_pods.go:89] "kindnet-dn6n8" [3c3ad1a4-a426-4593-b520-3ddacbbcedbb] Running
	I1025 09:48:00.241486  262001 system_pods.go:89] "kube-apiserver-addons-184548" [bf7b7404-bdc0-4bde-bc5e-78b6c046427b] Running
	I1025 09:48:00.241518  262001 system_pods.go:89] "kube-controller-manager-addons-184548" [b201b806-15b2-4190-8310-a15384162b02] Running
	I1025 09:48:00.241553  262001 system_pods.go:89] "kube-ingress-dns-minikube" [065b8ebc-6b4a-4aef-81e0-455096dd9765] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:48:00.241578  262001 system_pods.go:89] "kube-proxy-clv7b" [98d8d78b-4b77-49e2-b0bb-0ffc37ef6b7d] Running
	I1025 09:48:00.241606  262001 system_pods.go:89] "kube-scheduler-addons-184548" [384e707f-4d85-4826-965d-60352bad843a] Running
	I1025 09:48:00.241636  262001 system_pods.go:89] "metrics-server-85b7d694d7-5mbb4" [473fd1dc-bcd8-4299-ae17-9e3bca061756] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:48:00.241667  262001 system_pods.go:89] "nvidia-device-plugin-daemonset-7sktv" [d6a26aea-18d0-46d2-a809-bf7ec95759f6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:48:00.241700  262001 system_pods.go:89] "registry-6b586f9694-cft48" [3b3f9d6f-cbd8-4b92-987f-b61c282e6860] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:48:00.241731  262001 system_pods.go:89] "registry-creds-764b6fb674-dk8fg" [5d6b936c-c964-41cd-a147-a05337379ebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:48:00.241763  262001 system_pods.go:89] "registry-proxy-l4vs6" [2d113e11-a239-4418-8da7-40a53e33fd75] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:48:00.241800  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2bqhf" [d754d289-7438-4581-aec3-5f6119282c1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:48:00.241835  262001 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rlnlm" [a9f57908-6cfb-4029-b95f-56dcc3188ca2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:48:00.241879  262001 system_pods.go:89] "storage-provisioner" [6cfd6874-b84e-4d5a-8a54-04f7e12dbfcb] Running
	I1025 09:48:00.241913  262001 system_pods.go:126] duration metric: took 1.008403877s to wait for k8s-apps to be running ...
	I1025 09:48:00.243419  262001 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:48:00.243585  262001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:48:00.315628  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:00.331162  262001 system_svc.go:56] duration metric: took 87.747408ms WaitForService to wait for kubelet
	I1025 09:48:00.331202  262001 kubeadm.go:586] duration metric: took 43.870258278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:48:00.331225  262001 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:48:00.348566  262001 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:48:00.348672  262001 node_conditions.go:123] node cpu capacity is 2
	I1025 09:48:00.348715  262001 node_conditions.go:105] duration metric: took 17.478775ms to run NodePressure ...
	I1025 09:48:00.348768  262001 start.go:241] waiting for startup goroutines ...
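The WaitForService step above is a single shell probe: "systemctl is-active --quiet <unit>" prints nothing and answers purely through its exit status (0 when active). A minimal sketch of that probe, assuming a systemd host and omitting the sudo and stray "service" token from the logged invocation:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletRunning reports whether the kubelet systemd unit is active.
	// With --quiet there is no output to parse; Run() returning nil means
	// the command exited 0, i.e. the unit is active.
	func kubeletRunning() bool {
		return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		fmt.Println("kubelet running:", kubeletRunning())
	}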
	I1025 09:48:00.432415  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:00.432455  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:00.616168  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:00.747301  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:00.927256  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:00.927701  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:01.113079  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:01.247223  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:01.427807  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:01.427897  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:01.613213  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:01.747353  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:01.927923  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:01.928451  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:02.113788  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:02.247851  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:02.427645  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:02.428674  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:02.613224  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:02.746880  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:02.926695  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:02.927264  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:03.113556  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:03.247204  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:03.427516  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:03.428243  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:03.613049  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:03.747564  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:03.925930  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:03.926315  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:04.113181  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:04.246223  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:04.426127  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:04.426338  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:04.613186  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:04.747280  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:04.785417  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:48:04.925937  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:04.926332  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:05.113977  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:05.246550  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:05.425976  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:05.432470  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:48:05.608650  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:48:05.608685  262001 retry.go:31] will retry after 15.258673863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
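kubectl's client-side validation rejects any manifest document whose top-level apiVersion or kind is missing, which is exactly what the ig-crd.yaml error above reports. A crude, line-based sketch of that check — real kubectl parses the YAML properly; this only scans key prefixes, and the validate helper is made up for illustration:

	package main

	import (
		"fmt"
		"strings"
	)

	// validate flags a manifest document that lacks top-level apiVersion
	// or kind, mirroring the wording of the kubectl error above.
	func validate(doc string) error {
		var hasAPIVersion, hasKind bool
		for _, line := range strings.Split(doc, "\n") {
			switch {
			case strings.HasPrefix(line, "apiVersion:"):
				hasAPIVersion = true
			case strings.HasPrefix(line, "kind:"):
				hasKind = true
			}
		}
		var missing []string
		if !hasAPIVersion {
			missing = append(missing, "apiVersion not set")
		}
		if !hasKind {
			missing = append(missing, "kind not set")
		}
		if len(missing) > 0 {
			return fmt.Errorf("error validating data: [%s]", strings.Join(missing, ", "))
		}
		return nil
	}

	func main() {
		// An empty or comment-only document reproduces the failure mode above.
		fmt.Println(validate("# placeholder\n"))
	}

Note that all the other resources in the same apply ("unchanged"/"configured" in stdout) pass validation; only the ig-crd.yaml document is malformed, so every retry fails the same way.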
	I1025 09:48:05.612685  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:05.746807  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:05.925380  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:05.925558  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:06.114111  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:06.247210  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:06.425588  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:06.426257  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:06.613715  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:06.749134  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:06.926858  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:06.927043  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:07.113480  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:07.247683  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:07.425002  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:07.425470  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:07.612450  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:07.746677  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:07.925335  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:07.925742  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:08.112803  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:08.247152  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:08.425516  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:08.425816  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:08.612851  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:08.747337  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:08.925877  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:08.926142  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:09.113127  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:09.246162  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:09.425780  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:09.426250  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:09.613371  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:09.746375  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:09.925967  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:09.926238  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:10.112949  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:10.246984  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:10.424972  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:10.425461  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:10.613129  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:10.748377  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:10.926105  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:10.926773  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:11.114544  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:11.247311  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:11.425197  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:11.425433  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:11.613773  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:11.746852  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:11.926324  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:11.926473  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:12.113200  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:12.247516  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:12.425354  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:12.425776  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:12.612419  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:12.747644  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:12.926831  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:12.927268  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:13.115030  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:13.247826  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:13.427110  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:13.427579  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:13.613323  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:13.746804  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:13.926334  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:13.926486  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:14.113550  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:14.246531  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:14.425311  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:14.425768  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:14.612641  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:14.746828  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:14.925915  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:14.926071  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:15.113402  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:15.247034  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:15.426259  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:15.427828  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:15.613081  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:15.747037  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:15.926695  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:15.926825  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:16.112697  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:16.246729  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:16.425848  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:16.425944  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:16.612891  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:16.747141  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:16.926406  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:16.926683  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:17.113151  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:17.246058  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:17.426021  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:17.426805  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:17.613589  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:17.746912  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:17.926994  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:17.927772  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:18.113156  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:18.246894  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:18.426309  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:18.427019  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:18.613306  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:18.746336  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:18.926881  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:18.927120  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:19.113466  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:19.247457  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:19.425971  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:19.426278  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:19.626929  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:19.748228  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:19.926696  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:19.927029  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:20.113977  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:20.247921  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:20.426710  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:20.426935  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:20.612959  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:20.748664  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:20.868040  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:48:20.927075  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:20.927496  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:21.113672  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:21.247124  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:21.428720  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:21.429255  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:21.614155  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:21.746534  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:21.926415  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:21.926598  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:22.113379  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:22.247267  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:22.390227  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.522146782s)
	W1025 09:48:22.390264  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:48:22.390285  262001 retry.go:31] will retry after 18.197966339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
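The stderr above suggests --validate=false as a workaround. Purely for illustration — minikube's retry loop keeps validation on, and this is not what the test does — the suggested invocation would look like the sketch below, with the file paths taken from the log. Skipping client-side validation only defers the check: the API server will still reject a document that declares no kind at all.

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Hypothetical: the workaround kubectl's error message proposes.
		cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
			"-f", "/etc/kubernetes/addons/ig-crd.yaml",
			"-f", "/etc/kubernetes/addons/ig-deployment.yaml")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run()
	}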
	I1025 09:48:22.427397  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:22.427800  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:22.612931  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:22.747203  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:22.925977  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:22.926145  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:23.113347  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:23.247571  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:23.426417  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:23.426518  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:23.613926  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:23.747800  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:23.925156  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:23.926164  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:24.114276  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:24.246668  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:24.429757  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:24.430900  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:24.614226  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:24.746440  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:24.924953  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:24.925055  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:25.119282  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:25.246745  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:25.426040  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:25.427219  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:25.614114  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:25.749575  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:25.925042  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:25.925530  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:26.112512  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:26.258482  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:26.427018  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:26.427385  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:26.613865  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:26.748794  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:26.925554  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:26.925650  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:27.114806  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:27.247449  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:27.425177  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:27.425352  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:27.614014  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:27.747783  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:27.926043  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:27.926298  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:28.117363  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:28.251729  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:28.425746  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:28.426072  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:28.612740  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:28.747563  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:28.925520  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:28.925709  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:29.127027  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:29.251915  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:29.427423  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:29.427825  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:29.614037  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:29.746996  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:29.926847  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:29.926923  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:30.114675  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:30.257179  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:30.428692  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:30.429209  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:30.615623  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:30.751171  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:30.931224  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:30.931329  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:31.115528  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:31.255456  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:31.428765  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:31.428841  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:31.656366  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:31.760792  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:31.929168  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:31.929964  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:32.113021  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:32.252943  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:32.426950  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:32.427096  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:32.612966  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:32.747603  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:32.925304  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:32.925766  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:33.112687  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:33.247137  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:33.425744  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:33.425914  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:33.612784  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:33.748790  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:33.926844  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:33.927189  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:34.113381  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:34.247469  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:34.426622  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:34.427830  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:34.614464  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:34.747258  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:34.926642  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:34.927134  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:35.113409  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:35.246988  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:35.425269  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:35.425371  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:35.613363  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:35.748805  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:35.925817  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:35.926705  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:36.112328  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:36.246761  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:36.426384  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:36.426579  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:36.612669  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:36.746918  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:36.926242  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:36.926896  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:37.115570  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:37.247156  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:37.426014  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:37.426414  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:37.613232  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:37.747587  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:37.925351  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:37.925434  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:38.113263  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:38.247798  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:38.431149  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:38.431560  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:38.612914  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:38.747174  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:38.926288  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:38.926468  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:39.112553  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:39.247226  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:39.425826  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:39.426153  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:39.613349  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:39.746998  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:39.926170  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:39.927097  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:40.113834  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:40.247250  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:40.426501  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:40.427171  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:40.589430  262001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:48:40.613318  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:40.746894  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:40.926498  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:40.926746  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:41.116344  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:41.247181  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:41.428133  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:41.428503  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:41.612937  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:41.793819  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:41.928589  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:41.928924  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:42.115502  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:42.123450  262001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.533977391s)
	W1025 09:48:42.123521  262001 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:48:42.123640  262001 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
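Note on the failure above: kubectl's client-side validation rejects any manifest document that is missing the required apiVersion and kind type fields, which is exactly what the error for /etc/kubernetes/addons/ig-crd.yaml reports; the retry message itself suggests --validate=false as a workaround, at the cost of skipping the check. A minimal way to reproduce just the validation step by hand, assuming access to the same manifest (the --dry-run form avoids touching the cluster):

	# Re-run only kubectl's client-side validation against the shipped manifest:
	kubectl apply --dry-run=client --validate=true -f /etc/kubernetes/addons/ig-crd.yaml
	# Every YAML document in the file must begin with the two type fields, e.g.:
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition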
	I1025 09:48:42.248796  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:42.424978  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:42.425133  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:42.613865  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:42.747748  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:42.927187  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:42.928789  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:43.124296  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:43.247091  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:43.427564  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:43.427965  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:43.613556  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:43.749010  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:43.927189  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:43.927599  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:44.113496  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:44.247981  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:44.426732  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:44.426902  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:44.613191  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:44.746916  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:44.929748  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:44.929903  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:45.114188  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:45.247219  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:45.430110  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:45.430378  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:45.614094  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:45.747426  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:45.928277  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:45.928810  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:46.112703  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:46.247092  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:46.426165  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:46.426942  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:46.613255  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:46.747191  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:46.926861  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:46.927113  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:47.113354  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:47.247020  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:47.426567  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:47.426830  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:47.613073  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:47.747229  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:47.926534  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:47.926796  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:48.113309  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:48.247243  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:48.426436  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:48.426807  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:48.612653  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:48.751278  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:48.925635  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:48.926500  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:49.113307  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:49.246747  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:49.426587  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:49.427041  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:49.613401  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:49.748410  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:49.928642  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:49.928793  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:50.112830  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:50.246928  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:50.427226  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:50.427667  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:50.612473  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:50.746920  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:50.925006  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:50.927354  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:51.113710  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:51.258274  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:51.426371  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:51.426457  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:51.613576  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:51.746860  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:51.927026  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:51.927208  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:52.113949  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:52.247611  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:52.427631  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:52.428154  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:52.612894  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:52.749184  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:52.925955  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:52.926530  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:53.112650  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:53.248150  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:53.430017  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:53.430227  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:53.613287  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:53.746615  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:53.925595  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:53.925695  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:54.113043  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:54.247029  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:54.426068  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:54.425886  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:54.612995  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:54.747940  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:54.925600  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:54.925769  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:55.112588  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:55.246879  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:55.427132  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:55.427962  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:55.613342  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:55.747211  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:55.926392  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:55.926992  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:56.113159  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:56.246590  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:56.426548  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:56.427849  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:56.614613  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:56.747484  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:56.926514  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:56.926969  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:48:57.113849  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:57.247309  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:57.427289  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:57.428066  262001 kapi.go:107] duration metric: took 1m34.506487077s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 09:48:57.613188  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:57.748422  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:57.926661  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:58.113170  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:58.246468  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:58.424965  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:58.613123  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:58.746078  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:58.925068  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:59.113069  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:59.247648  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:59.425765  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:48:59.612875  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:48:59.747357  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:48:59.929479  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:00.118257  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:00.248376  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:00.428716  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:00.613664  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:00.746676  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:00.927049  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:01.113713  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:01.249174  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:01.428675  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:01.613389  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:01.746951  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:01.929661  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:02.113382  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:02.248624  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:02.426195  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:02.614243  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:02.765489  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:02.939702  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:03.115551  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:03.250798  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:03.425879  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:03.616676  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:03.747755  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:03.925224  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:04.115876  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:04.247644  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:04.427501  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:04.612598  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:04.748165  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:04.926157  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:05.113142  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:05.247501  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:05.426896  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:05.612723  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:05.746776  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:05.925738  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:06.113665  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:06.247368  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:06.426060  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:06.613376  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:06.752837  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:06.925404  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:07.113978  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:07.247838  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:07.425221  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:07.613525  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:07.746789  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:07.925738  262001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:49:08.112981  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:08.248938  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:08.440245  262001 kapi.go:107] duration metric: took 1m45.51861226s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 09:49:08.613575  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:08.746862  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:09.113048  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:09.247243  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:09.613744  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:09.747322  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:10.114022  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:10.247472  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:10.613422  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:10.746872  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:11.113742  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:11.247687  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:11.612875  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:11.751830  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:12.115447  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:49:12.250182  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:12.613412  262001 kapi.go:107] duration metric: took 1m46.003988966s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 09:49:12.617620  262001 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-184548 cluster.
	I1025 09:49:12.621169  262001 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 09:49:12.624570  262001 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1025 09:49:12.746660  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:13.247126  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:13.746777  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:14.248335  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:14.747003  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:15.246801  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:15.746909  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:16.246829  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:16.747542  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:17.247921  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:17.747040  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:18.247565  262001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:49:18.747701  262001 kapi.go:107] duration metric: took 1m55.504419152s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 09:49:18.750741  262001 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, registry-creds, cloud-spanner, amd-gpu-device-plugin, ingress-dns, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1025 09:49:18.753713  262001 addons.go:514] duration metric: took 2m2.292256316s for enable addons: enabled=[nvidia-device-plugin storage-provisioner registry-creds cloud-spanner amd-gpu-device-plugin ingress-dns default-storageclass metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1025 09:49:18.753764  262001 start.go:246] waiting for cluster config update ...
	I1025 09:49:18.753787  262001 start.go:255] writing updated cluster config ...
	I1025 09:49:18.754139  262001 ssh_runner.go:195] Run: rm -f paused
	I1025 09:49:18.758757  262001 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:49:18.762396  262001 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hq8d8" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:18.769490  262001 pod_ready.go:94] pod "coredns-66bc5c9577-hq8d8" is "Ready"
	I1025 09:49:18.769518  262001 pod_ready.go:86] duration metric: took 7.095622ms for pod "coredns-66bc5c9577-hq8d8" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:18.773299  262001 pod_ready.go:83] waiting for pod "etcd-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:18.778439  262001 pod_ready.go:94] pod "etcd-addons-184548" is "Ready"
	I1025 09:49:18.778468  262001 pod_ready.go:86] duration metric: took 5.143177ms for pod "etcd-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:18.780883  262001 pod_ready.go:83] waiting for pod "kube-apiserver-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:18.786105  262001 pod_ready.go:94] pod "kube-apiserver-addons-184548" is "Ready"
	I1025 09:49:18.786139  262001 pod_ready.go:86] duration metric: took 5.22961ms for pod "kube-apiserver-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:18.788722  262001 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:19.163299  262001 pod_ready.go:94] pod "kube-controller-manager-addons-184548" is "Ready"
	I1025 09:49:19.163331  262001 pod_ready.go:86] duration metric: took 374.58361ms for pod "kube-controller-manager-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:19.363033  262001 pod_ready.go:83] waiting for pod "kube-proxy-clv7b" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:19.762585  262001 pod_ready.go:94] pod "kube-proxy-clv7b" is "Ready"
	I1025 09:49:19.762617  262001 pod_ready.go:86] duration metric: took 399.557695ms for pod "kube-proxy-clv7b" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:19.979389  262001 pod_ready.go:83] waiting for pod "kube-scheduler-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:20.363300  262001 pod_ready.go:94] pod "kube-scheduler-addons-184548" is "Ready"
	I1025 09:49:20.363329  262001 pod_ready.go:86] duration metric: took 383.902094ms for pod "kube-scheduler-addons-184548" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:49:20.363341  262001 pod_ready.go:40] duration metric: took 1.60454694s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:49:20.433751  262001 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:49:20.439743  262001 out.go:179] * Done! kubectl is now configured to use "addons-184548" cluster and "default" namespace by default
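For readers replaying this section: each kapi.go:96 line above is one poll iteration against a label selector, and the kapi.go:107 lines print the total duration once every matching pod has left Pending and reports Ready. A rough hand-run equivalent of that wait, using the selectors from this log and the namespaces visible in the container listing below (the 4m timeout is an arbitrary choice for illustration, not a value taken from the log):

	# Block until the addon pods matching each selector are Ready:
	kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	  --for=condition=Ready --timeout=4m
	kubectl -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx \
	  --for=condition=Ready --timeout=4m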
	
	
	==> CRI-O <==
	Oct 25 09:49:50 addons-184548 crio[826]: time="2025-10-25T09:49:50.188999649Z" level=info msg="Started container" PID=5346 containerID=fa2045bdd42e704c94ee71869907433a109bd0a1b916c3a6fd42c97a1ad34a22 description=default/test-local-path/busybox id=a7c279ae-c6ae-460d-87f1-e288a7f64e39 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d5ca5d26225e52283642641e7bd6dfbbf5b86b20aea6e16cb28f6a91135a1def
	Oct 25 09:49:51 addons-184548 crio[826]: time="2025-10-25T09:49:51.614948353Z" level=info msg="Stopping pod sandbox: d5ca5d26225e52283642641e7bd6dfbbf5b86b20aea6e16cb28f6a91135a1def" id=b12d4a5c-3d28-4455-b27d-1e8b8f2cbf2a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:49:51 addons-184548 crio[826]: time="2025-10-25T09:49:51.615264672Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:d5ca5d26225e52283642641e7bd6dfbbf5b86b20aea6e16cb28f6a91135a1def UID:4de165a4-67d3-48fd-8b7a-18a8715dbd9a NetNS:/var/run/netns/9ef3d42d-4537-4ddd-82e9-0782482b94ea Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001080798}] Aliases:map[]}"
	Oct 25 09:49:51 addons-184548 crio[826]: time="2025-10-25T09:49:51.615410265Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:49:51 addons-184548 crio[826]: time="2025-10-25T09:49:51.640047033Z" level=info msg="Stopped pod sandbox: d5ca5d26225e52283642641e7bd6dfbbf5b86b20aea6e16cb28f6a91135a1def" id=b12d4a5c-3d28-4455-b27d-1e8b8f2cbf2a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.112054998Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed/POD" id=6e17daa1-d9c4-43e0-977c-400c4c864334 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.112200994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.130558657Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed Namespace:local-path-storage ID:a530a770ba1f142e7c2d37dde2aa67694cc63330ae33007923e8eff69c7d551f UID:bc5cf292-8082-4bc8-908e-ffb6f606e8d9 NetNS:/var/run/netns/43c0cee8-95a6-45d9-8553-c94205571a6f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001081090}] Aliases:map[]}"
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.131767341Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.141681777Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed Namespace:local-path-storage ID:a530a770ba1f142e7c2d37dde2aa67694cc63330ae33007923e8eff69c7d551f UID:bc5cf292-8082-4bc8-908e-ffb6f606e8d9 NetNS:/var/run/netns/43c0cee8-95a6-45d9-8553-c94205571a6f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001081090}] Aliases:map[]}"
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.142384831Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed for CNI network kindnet (type=ptp)"
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.145784074Z" level=info msg="Ran pod sandbox a530a770ba1f142e7c2d37dde2aa67694cc63330ae33007923e8eff69c7d551f with infra container: local-path-storage/helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed/POD" id=6e17daa1-d9c4-43e0-977c-400c4c864334 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.154796258Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=bb42957e-d1b8-4181-b445-932dda6e056d name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.157768872Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=af4ffe8c-75b2-4760-83b6-0486fc9a3509 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.167260543Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed/helper-pod" id=df033e1c-ff6f-42c7-aab6-9320cc0f4d18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.16768499Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.175397847Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.176029943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.198523193Z" level=info msg="Created container 65a99594951e64586b2d56e5cb3de5814f9f362bcb98f36e0a2e1ba4208a016e: local-path-storage/helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed/helper-pod" id=df033e1c-ff6f-42c7-aab6-9320cc0f4d18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.201635697Z" level=info msg="Starting container: 65a99594951e64586b2d56e5cb3de5814f9f362bcb98f36e0a2e1ba4208a016e" id=f2034c63-de56-43c2-93f0-b03a326b0f72 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:49:53 addons-184548 crio[826]: time="2025-10-25T09:49:53.20548621Z" level=info msg="Started container" PID=5434 containerID=65a99594951e64586b2d56e5cb3de5814f9f362bcb98f36e0a2e1ba4208a016e description=local-path-storage/helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed/helper-pod id=f2034c63-de56-43c2-93f0-b03a326b0f72 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a530a770ba1f142e7c2d37dde2aa67694cc63330ae33007923e8eff69c7d551f
	Oct 25 09:49:54 addons-184548 crio[826]: time="2025-10-25T09:49:54.637217123Z" level=info msg="Stopping pod sandbox: a530a770ba1f142e7c2d37dde2aa67694cc63330ae33007923e8eff69c7d551f" id=9f88c148-45ba-4600-b941-3d2075d37978 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:49:54 addons-184548 crio[826]: time="2025-10-25T09:49:54.637595547Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed Namespace:local-path-storage ID:a530a770ba1f142e7c2d37dde2aa67694cc63330ae33007923e8eff69c7d551f UID:bc5cf292-8082-4bc8-908e-ffb6f606e8d9 NetNS:/var/run/netns/43c0cee8-95a6-45d9-8553-c94205571a6f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079cc8}] Aliases:map[]}"
	Oct 25 09:49:54 addons-184548 crio[826]: time="2025-10-25T09:49:54.63779184Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed from CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:49:54 addons-184548 crio[826]: time="2025-10-25T09:49:54.662589807Z" level=info msg="Stopped pod sandbox: a530a770ba1f142e7c2d37dde2aa67694cc63330ae33007923e8eff69c7d551f" id=9f88c148-45ba-4600-b941-3d2075d37978 name=/runtime.v1.RuntimeService/StopPodSandbox
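Context for the CRI-O entries above: the helper-pod-create-pvc-… and helper-pod-delete-pvc-… sandboxes are short-lived pods that the rancher local-path provisioner launches to stage and then clean up the hostPath directory backing a PVC, which is why both show up as Exited in the container listing below. To watch the same lifecycle by hand on a live cluster (generic kubectl, nothing minikube-specific):

	# Helper pods appear briefly in the provisioner's namespace, then exit:
	kubectl -n local-path-storage get pods -w
	# The PVC they serviced is bound on create and reclaimed on delete:
	kubectl get pvc,pv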
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	65a99594951e6       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   a530a770ba1f1       helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed   local-path-storage
	fa2045bdd42e7       docker.io/library/busybox@sha256:aefc3a378c4cf11a6d85071438d3bf7634633a34c6a68d4c5f928516d556c366                                            4 seconds ago        Exited              busybox                                  0                   d5ca5d26225e5       test-local-path                                              default
	56cd3cf912239       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            9 seconds ago        Exited              helper-pod                               0                   661d6cdf02ab7       helper-pod-create-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed   local-path-storage
	104b5117bce7c       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          10 seconds ago       Exited              registry-test                            0                   136330afd51b2       registry-test                                                default
	cb5ecb8477046       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          31 seconds ago       Running             busybox                                  0                   2641306d8a1cc       busybox                                                      default
	99ffce70564e9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          37 seconds ago       Running             csi-snapshotter                          0                   c8cf2b2a99b36       csi-hostpathplugin-4jzcx                                     kube-system
	87380913bd0ee       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          38 seconds ago       Running             csi-provisioner                          0                   c8cf2b2a99b36       csi-hostpathplugin-4jzcx                                     kube-system
	917acafd89879       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            40 seconds ago       Running             liveness-probe                           0                   c8cf2b2a99b36       csi-hostpathplugin-4jzcx                                     kube-system
	1511789ef92ec       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           41 seconds ago       Running             hostpath                                 0                   c8cf2b2a99b36       csi-hostpathplugin-4jzcx                                     kube-system
	c854ff59a87cb       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 42 seconds ago       Running             gcp-auth                                 0                   62cba53c1d7cf       gcp-auth-78565c9fb4-ljkhx                                    gcp-auth
	fe07be820f29e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                46 seconds ago       Running             node-driver-registrar                    0                   c8cf2b2a99b36       csi-hostpathplugin-4jzcx                                     kube-system
	eecab95d02e84       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             47 seconds ago       Running             controller                               0                   a999dce303784       ingress-nginx-controller-675c5ddd98-kn8cf                    ingress-nginx
	d7eb5ef695f14       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            54 seconds ago       Running             gadget                                   0                   62f1638fd598f       gadget-wc7b2                                                 gadget
	353ca1bd95855       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              58 seconds ago       Running             registry-proxy                           0                   6063dc7ae5ad8       registry-proxy-l4vs6                                         kube-system
	5c51c82f38056       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   cf0c5231b71f4       nvidia-device-plugin-daemonset-7sktv                         kube-system
	b431f883c99c6       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             About a minute ago   Exited              patch                                    2                   96fca6288ceab       gcp-auth-certs-patch-8fn9t                                   gcp-auth
	c1f4c2cd01a80       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   00c41a7e28d97       yakd-dashboard-5ff678cb9-kjxsl                               yakd-dashboard
	25d99e542d8e7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              patch                                    0                   980d0fe304064       ingress-nginx-admission-patch-cl6qb                          ingress-nginx
	d4fb23cf89dec       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   c8cf2b2a99b36       csi-hostpathplugin-4jzcx                                     kube-system
	6064a19490c79       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   6e77312721790       csi-hostpath-resizer-0                                       kube-system
	7c76fe020b3e5       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   dcb56990daf90       csi-hostpath-attacher-0                                      kube-system
	7e5a14fca747b       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   f3e5086272c34       metrics-server-85b7d694d7-5mbb4                              kube-system
	d554dffae9bef       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   16f10c65efc8d       local-path-provisioner-648f6765c9-nv5k2                      local-path-storage
	9e7e539bbba98       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   d7bce8529595d       snapshot-controller-7d9fbc56b8-rlnlm                         kube-system
	8697134e7e473       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   915f3eea07fad       ingress-nginx-admission-create-bmfm4                         ingress-nginx
	0a744f4822b1c       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   a58c40e3d670d       snapshot-controller-7d9fbc56b8-2bqhf                         kube-system
	97501aeea8896       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   17b8c7bda4bc6       registry-6b586f9694-cft48                                    kube-system
	fb43fde6f7081       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   55c093c8fb7c9       kube-ingress-dns-minikube                                    kube-system
	cb279419abc6e       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   db9b35ce5f1fd       cloud-spanner-emulator-86bd5cbb97-wv5rr                      default
	01b285afd5854       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   d57e84c4d76af       coredns-66bc5c9577-hq8d8                                     kube-system
	6dea6b6abd8b2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   6512c98e5c3b1       storage-provisioner                                          kube-system
	ea0a2c59127ed       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   c4e73032c3bac       kube-proxy-clv7b                                             kube-system
	7c269e9ecf5ba       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   462c9337ebadf       kindnet-dn6n8                                                kube-system
	a9c067b0e9c58       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   cfac145b13a4e       kube-scheduler-addons-184548                                 kube-system
	50b3905935f0c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   2279cb9db3497       kube-controller-manager-addons-184548                        kube-system
	fc1be8cbffe43       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   6884eb0b93e43       kube-apiserver-addons-184548                                 kube-system
	703663e8a09cc       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   dd8e387d48f11       etcd-addons-184548                                           kube-system
	
	
	==> coredns [01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f] <==
	[INFO] 10.244.0.15:59347 - 49194 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002466246s
	[INFO] 10.244.0.15:59347 - 13041 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000151903s
	[INFO] 10.244.0.15:59347 - 37803 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000245081s
	[INFO] 10.244.0.15:49398 - 62277 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000159411s
	[INFO] 10.244.0.15:49398 - 62040 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088001s
	[INFO] 10.244.0.15:47620 - 36644 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00010332s
	[INFO] 10.244.0.15:47620 - 36447 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073379s
	[INFO] 10.244.0.15:42826 - 8681 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085884s
	[INFO] 10.244.0.15:42826 - 8228 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000214377s
	[INFO] 10.244.0.15:39379 - 15507 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001713583s
	[INFO] 10.244.0.15:39379 - 15687 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001836874s
	[INFO] 10.244.0.15:39287 - 23563 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000141966s
	[INFO] 10.244.0.15:39287 - 23415 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000230435s
	[INFO] 10.244.0.21:45010 - 18529 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000199435s
	[INFO] 10.244.0.21:49727 - 64868 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000145248s
	[INFO] 10.244.0.21:54291 - 38536 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000189212s
	[INFO] 10.244.0.21:55103 - 46761 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000181072s
	[INFO] 10.244.0.21:43650 - 11242 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00017482s
	[INFO] 10.244.0.21:43503 - 51774 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138652s
	[INFO] 10.244.0.21:57499 - 50078 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004108066s
	[INFO] 10.244.0.21:46437 - 46495 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004284461s
	[INFO] 10.244.0.21:33416 - 62670 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001695515s
	[INFO] 10.244.0.21:39424 - 393 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001080535s
	[INFO] 10.244.0.23:60892 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000147792s
	[INFO] 10.244.0.23:42473 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000178816s
	
	
	==> describe nodes <==
	Name:               addons-184548
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-184548
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=addons-184548
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_47_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-184548
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-184548"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:47:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-184548
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:49:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:49:44 +0000   Sat, 25 Oct 2025 09:47:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:49:44 +0000   Sat, 25 Oct 2025 09:47:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:49:44 +0000   Sat, 25 Oct 2025 09:47:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:49:44 +0000   Sat, 25 Oct 2025 09:47:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-184548
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ba66d0db-65f5-42cb-b217-b8f2184e05a9
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     cloud-spanner-emulator-86bd5cbb97-wv5rr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gadget                      gadget-wc7b2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-78565c9fb4-ljkhx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-kn8cf    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m33s
	  kube-system                 coredns-66bc5c9577-hq8d8                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m38s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 csi-hostpathplugin-4jzcx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 etcd-addons-184548                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m43s
	  kube-system                 kindnet-dn6n8                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m38s
	  kube-system                 kube-apiserver-addons-184548                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-controller-manager-addons-184548        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-clv7b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-scheduler-addons-184548                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 metrics-server-85b7d694d7-5mbb4              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m34s
	  kube-system                 nvidia-device-plugin-daemonset-7sktv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 registry-6b586f9694-cft48                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 registry-creds-764b6fb674-dk8fg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 registry-proxy-l4vs6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 snapshot-controller-7d9fbc56b8-2bqhf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 snapshot-controller-7d9fbc56b8-rlnlm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  local-path-storage          local-path-provisioner-648f6765c9-nv5k2      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-kjxsl               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m36s                  kube-proxy       
	  Warning  CgroupV1                 2m51s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m51s (x8 over 2m51s)  kubelet          Node addons-184548 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m51s (x8 over 2m51s)  kubelet          Node addons-184548 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m51s (x8 over 2m51s)  kubelet          Node addons-184548 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m44s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m44s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m44s                  kubelet          Node addons-184548 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m44s                  kubelet          Node addons-184548 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m44s                  kubelet          Node addons-184548 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m39s                  node-controller  Node addons-184548 event: Registered Node addons-184548 in Controller
	  Normal   NodeReady                117s                   kubelet          Node addons-184548 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct25 09:23] overlayfs: idmapped layers are currently not supported
	[Oct25 09:24] overlayfs: idmapped layers are currently not supported
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	[Oct25 09:35] overlayfs: idmapped layers are currently not supported
	[Oct25 09:36] overlayfs: idmapped layers are currently not supported
	[ +24.160248] overlayfs: idmapped layers are currently not supported
	[Oct25 09:37] overlayfs: idmapped layers are currently not supported
	[  +8.216028] overlayfs: idmapped layers are currently not supported
	[Oct25 09:38] overlayfs: idmapped layers are currently not supported
	[Oct25 09:39] overlayfs: idmapped layers are currently not supported
	[Oct25 09:41] overlayfs: idmapped layers are currently not supported
	[ +14.126672] overlayfs: idmapped layers are currently not supported
	[Oct25 09:42] overlayfs: idmapped layers are currently not supported
	[Oct25 09:43] overlayfs: idmapped layers are currently not supported
	[Oct25 09:45] kauditd_printk_skb: 8 callbacks suppressed
	[Oct25 09:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2] <==
	{"level":"warn","ts":"2025-10-25T09:47:07.523690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.552971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.581617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.605803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.643184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.669005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.715431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.730738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.758858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.786818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.804679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.830238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.855994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.891887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.924971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.948379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:07.980769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:08.005593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:08.122727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:23.674159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:23.685708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:45.997630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:46.015694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:46.078033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:47:46.092191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54402","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [c854ff59a87cb31257a3fa6b2393f211d9391c2200b70a5dce42efd5a674150a] <==
	2025/10/25 09:49:12 GCP Auth Webhook started!
	2025/10/25 09:49:20 Ready to marshal response ...
	2025/10/25 09:49:20 Ready to write response ...
	2025/10/25 09:49:21 Ready to marshal response ...
	2025/10/25 09:49:21 Ready to write response ...
	2025/10/25 09:49:21 Ready to marshal response ...
	2025/10/25 09:49:21 Ready to write response ...
	2025/10/25 09:49:42 Ready to marshal response ...
	2025/10/25 09:49:42 Ready to write response ...
	2025/10/25 09:49:43 Ready to marshal response ...
	2025/10/25 09:49:43 Ready to write response ...
	2025/10/25 09:49:43 Ready to marshal response ...
	2025/10/25 09:49:43 Ready to write response ...
	2025/10/25 09:49:52 Ready to marshal response ...
	2025/10/25 09:49:52 Ready to write response ...
	
	
	==> kernel <==
	 09:49:55 up  1:32,  0 user,  load average: 2.41, 2.76, 3.08
	Linux addons-184548 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced] <==
	I1025 09:47:49.830059       1 controller.go:711] "Syncing nftables rules"
	I1025 09:47:58.231138       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:47:58.231193       1 main.go:301] handling current node
	I1025 09:48:08.234141       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:48:08.234205       1 main.go:301] handling current node
	I1025 09:48:18.231188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:48:18.231215       1 main.go:301] handling current node
	I1025 09:48:28.227497       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:48:28.227536       1 main.go:301] handling current node
	I1025 09:48:38.228195       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:48:38.228235       1 main.go:301] handling current node
	I1025 09:48:48.230112       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:48:48.230150       1 main.go:301] handling current node
	I1025 09:48:58.228130       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:48:58.228169       1 main.go:301] handling current node
	I1025 09:49:08.227577       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:49:08.227620       1 main.go:301] handling current node
	I1025 09:49:18.230172       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:49:18.230293       1 main.go:301] handling current node
	I1025 09:49:28.229192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:49:28.229234       1 main.go:301] handling current node
	I1025 09:49:38.230146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:49:38.230186       1 main.go:301] handling current node
	I1025 09:49:48.227842       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:49:48.227899       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09] <==
	W1025 09:47:46.092015       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1025 09:47:58.776101       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.89.81:443: connect: connection refused
	E1025 09:47:58.776154       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.89.81:443: connect: connection refused" logger="UnhandledError"
	W1025 09:47:58.779641       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.89.81:443: connect: connection refused
	E1025 09:47:58.779673       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.89.81:443: connect: connection refused" logger="UnhandledError"
	W1025 09:47:58.822451       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.89.81:443: connect: connection refused
	E1025 09:47:58.822503       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.89.81:443: connect: connection refused" logger="UnhandledError"
	W1025 09:48:22.027848       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:48:22.027915       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1025 09:48:22.027940       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 09:48:22.030238       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:48:22.030329       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1025 09:48:22.030340       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1025 09:48:32.223321       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.55.176:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.55.176:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.55.176:443: connect: connection refused" logger="UnhandledError"
	W1025 09:48:32.223891       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:48:32.224090       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 09:48:32.224994       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.55.176:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.55.176:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.55.176:443: connect: connection refused" logger="UnhandledError"
	I1025 09:48:32.291049       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 09:49:30.788798       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57550: use of closed network connection
	
	
	==> kube-controller-manager [50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90] <==
	I1025 09:47:16.019638       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-184548"
	I1025 09:47:16.019684       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 09:47:16.019902       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:47:16.020092       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:47:16.022568       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:47:16.022731       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:47:16.022976       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:47:16.023037       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:47:16.023254       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:47:16.023503       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:47:16.026556       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:47:16.028015       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:47:16.030962       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:47:16.032117       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:47:16.055410       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:47:45.989644       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 09:47:45.989811       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1025 09:47:45.989857       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1025 09:47:46.064921       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1025 09:47:46.069363       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 09:47:46.090418       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:47:46.169513       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:48:01.028970       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1025 09:48:16.096701       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 09:48:16.178605       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692] <==
	I1025 09:47:18.106764       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:47:18.183558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:47:18.283790       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:47:18.283829       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:47:18.283898       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:47:18.343658       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:47:18.343708       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:47:18.348691       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:47:18.352551       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:47:18.352575       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:47:18.353940       1 config.go:200] "Starting service config controller"
	I1025 09:47:18.353951       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:47:18.353967       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:47:18.353972       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:47:18.354192       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:47:18.354199       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:47:18.354818       1 config.go:309] "Starting node config controller"
	I1025 09:47:18.354826       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:47:18.354832       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:47:18.454466       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:47:18.454503       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:47:18.454560       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a] <==
	E1025 09:47:09.350652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:47:09.350720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:47:09.350773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:47:09.354405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:47:09.354503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:47:09.354581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:47:09.354705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:47:09.354769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:47:09.354836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:47:09.354900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:47:09.354964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:47:09.355058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:47:09.355124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:47:09.355187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:47:09.355234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:47:09.355285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:47:10.161730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:47:10.202164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:47:10.333479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:47:10.344190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:47:10.346571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 09:47:10.357880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:47:10.392605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:47:10.414656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1025 09:47:12.927927       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:49:51 addons-184548 kubelet[1280]: I1025 09:49:51.757941    1280 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de165a4-67d3-48fd-8b7a-18a8715dbd9a-kube-api-access-ht8sp" (OuterVolumeSpecName: "kube-api-access-ht8sp") pod "4de165a4-67d3-48fd-8b7a-18a8715dbd9a" (UID: "4de165a4-67d3-48fd-8b7a-18a8715dbd9a"). InnerVolumeSpecName "kube-api-access-ht8sp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 25 09:49:51 addons-184548 kubelet[1280]: I1025 09:49:51.852254    1280 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ht8sp\" (UniqueName: \"kubernetes.io/projected/4de165a4-67d3-48fd-8b7a-18a8715dbd9a-kube-api-access-ht8sp\") on node \"addons-184548\" DevicePath \"\""
	Oct 25 09:49:51 addons-184548 kubelet[1280]: I1025 09:49:51.852304    1280 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4de165a4-67d3-48fd-8b7a-18a8715dbd9a-gcp-creds\") on node \"addons-184548\" DevicePath \"\""
	Oct 25 09:49:51 addons-184548 kubelet[1280]: I1025 09:49:51.852319    1280 reconciler_common.go:299] "Volume detached for volume \"pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed\" (UniqueName: \"kubernetes.io/host-path/4de165a4-67d3-48fd-8b7a-18a8715dbd9a-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed\") on node \"addons-184548\" DevicePath \"\""
	Oct 25 09:49:52 addons-184548 kubelet[1280]: I1025 09:49:52.624415    1280 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5ca5d26225e52283642641e7bd6dfbbf5b86b20aea6e16cb28f6a91135a1def"
	Oct 25 09:49:52 addons-184548 kubelet[1280]: E1025 09:49:52.629081    1280 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-184548\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-184548' and this object" podUID="4de165a4-67d3-48fd-8b7a-18a8715dbd9a" pod="default/test-local-path"
	Oct 25 09:49:52 addons-184548 kubelet[1280]: E1025 09:49:52.803086    1280 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-184548\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-184548' and this object" podUID="4de165a4-67d3-48fd-8b7a-18a8715dbd9a" pod="default/test-local-path"
	Oct 25 09:49:52 addons-184548 kubelet[1280]: I1025 09:49:52.962949    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg69c\" (UniqueName: \"kubernetes.io/projected/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-kube-api-access-tg69c\") pod \"helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed\" (UID: \"bc5cf292-8082-4bc8-908e-ffb6f606e8d9\") " pod="local-path-storage/helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed"
	Oct 25 09:49:52 addons-184548 kubelet[1280]: I1025 09:49:52.963264    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-script\") pod \"helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed\" (UID: \"bc5cf292-8082-4bc8-908e-ffb6f606e8d9\") " pod="local-path-storage/helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed"
	Oct 25 09:49:52 addons-184548 kubelet[1280]: I1025 09:49:52.963417    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-gcp-creds\") pod \"helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed\" (UID: \"bc5cf292-8082-4bc8-908e-ffb6f606e8d9\") " pod="local-path-storage/helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed"
	Oct 25 09:49:52 addons-184548 kubelet[1280]: I1025 09:49:52.963601    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-data\") pod \"helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed\" (UID: \"bc5cf292-8082-4bc8-908e-ffb6f606e8d9\") " pod="local-path-storage/helper-pod-delete-pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed"
	Oct 25 09:49:53 addons-184548 kubelet[1280]: E1025 09:49:53.633302    1280 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-184548\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-184548' and this object" podUID="4de165a4-67d3-48fd-8b7a-18a8715dbd9a" pod="default/test-local-path"
	Oct 25 09:49:53 addons-184548 kubelet[1280]: I1025 09:49:53.729914    1280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de165a4-67d3-48fd-8b7a-18a8715dbd9a" path="/var/lib/kubelet/pods/4de165a4-67d3-48fd-8b7a-18a8715dbd9a/volumes"
	Oct 25 09:49:54 addons-184548 kubelet[1280]: I1025 09:49:54.788089    1280 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-script\") pod \"bc5cf292-8082-4bc8-908e-ffb6f606e8d9\" (UID: \"bc5cf292-8082-4bc8-908e-ffb6f606e8d9\") "
	Oct 25 09:49:54 addons-184548 kubelet[1280]: I1025 09:49:54.788166    1280 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg69c\" (UniqueName: \"kubernetes.io/projected/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-kube-api-access-tg69c\") pod \"bc5cf292-8082-4bc8-908e-ffb6f606e8d9\" (UID: \"bc5cf292-8082-4bc8-908e-ffb6f606e8d9\") "
	Oct 25 09:49:54 addons-184548 kubelet[1280]: I1025 09:49:54.788205    1280 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-gcp-creds\") pod \"bc5cf292-8082-4bc8-908e-ffb6f606e8d9\" (UID: \"bc5cf292-8082-4bc8-908e-ffb6f606e8d9\") "
	Oct 25 09:49:54 addons-184548 kubelet[1280]: I1025 09:49:54.788223    1280 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-data\") pod \"bc5cf292-8082-4bc8-908e-ffb6f606e8d9\" (UID: \"bc5cf292-8082-4bc8-908e-ffb6f606e8d9\") "
	Oct 25 09:49:54 addons-184548 kubelet[1280]: I1025 09:49:54.788377    1280 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-data" (OuterVolumeSpecName: "data") pod "bc5cf292-8082-4bc8-908e-ffb6f606e8d9" (UID: "bc5cf292-8082-4bc8-908e-ffb6f606e8d9"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 25 09:49:54 addons-184548 kubelet[1280]: I1025 09:49:54.788693    1280 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-script" (OuterVolumeSpecName: "script") pod "bc5cf292-8082-4bc8-908e-ffb6f606e8d9" (UID: "bc5cf292-8082-4bc8-908e-ffb6f606e8d9"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 25 09:49:54 addons-184548 kubelet[1280]: I1025 09:49:54.788942    1280 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "bc5cf292-8082-4bc8-908e-ffb6f606e8d9" (UID: "bc5cf292-8082-4bc8-908e-ffb6f606e8d9"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 25 09:49:54 addons-184548 kubelet[1280]: I1025 09:49:54.790788    1280 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-kube-api-access-tg69c" (OuterVolumeSpecName: "kube-api-access-tg69c") pod "bc5cf292-8082-4bc8-908e-ffb6f606e8d9" (UID: "bc5cf292-8082-4bc8-908e-ffb6f606e8d9"). InnerVolumeSpecName "kube-api-access-tg69c". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 25 09:49:54 addons-184548 kubelet[1280]: I1025 09:49:54.890416    1280 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-script\") on node \"addons-184548\" DevicePath \"\""
	Oct 25 09:49:54 addons-184548 kubelet[1280]: I1025 09:49:54.890452    1280 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tg69c\" (UniqueName: \"kubernetes.io/projected/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-kube-api-access-tg69c\") on node \"addons-184548\" DevicePath \"\""
	Oct 25 09:49:54 addons-184548 kubelet[1280]: I1025 09:49:54.890467    1280 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-gcp-creds\") on node \"addons-184548\" DevicePath \"\""
	Oct 25 09:49:54 addons-184548 kubelet[1280]: I1025 09:49:54.890478    1280 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/bc5cf292-8082-4bc8-908e-ffb6f606e8d9-data\") on node \"addons-184548\" DevicePath \"\""
	
	
	==> storage-provisioner [6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb] <==
	W1025 09:49:30.063956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:32.067827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:32.073305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:34.076271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:34.080151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:36.083501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:36.087840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:38.090665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:38.096869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:40.100874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:40.115644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:42.162283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:42.179175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:44.182584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:44.187783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:46.191524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:46.196653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:48.203849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:48.209186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:50.213008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:50.224525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:52.227808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:52.232490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:54.235607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:49:54.243005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
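
The storage-provisioner warnings in the dump above are emitted every couple of seconds on what appears to be renewal of an Endpoints-based leader-election lock (the ~2 s cadence matches the default client-go retry period; this interpretation is an inference, not something the log states). Kubernetes deprecates v1 Endpoints in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice, so the warnings are noise here rather than the cause of any failure. The old and new resources can be compared directly; a usage sketch with standard kubectl:

	kubectl --context addons-184548 -n kube-system get endpoints
	kubectl --context addons-184548 -n kube-system get endpointslices.discovery.k8s.io
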
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-184548 -n addons-184548
helpers_test.go:269: (dbg) Run:  kubectl --context addons-184548 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-bmfm4 ingress-nginx-admission-patch-cl6qb registry-creds-764b6fb674-dk8fg
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-184548 describe pod ingress-nginx-admission-create-bmfm4 ingress-nginx-admission-patch-cl6qb registry-creds-764b6fb674-dk8fg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-184548 describe pod ingress-nginx-admission-create-bmfm4 ingress-nginx-admission-patch-cl6qb registry-creds-764b6fb674-dk8fg: exit status 1 (103.078461ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bmfm4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cl6qb" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-dk8fg" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-184548 describe pod ingress-nginx-admission-create-bmfm4 ingress-nginx-admission-patch-cl6qb registry-creds-764b6fb674-dk8fg: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable headlamp --alsologtostderr -v=1: exit status 11 (294.584707ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:49:56.619544  269330 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:49:56.620472  269330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:56.620517  269330 out.go:374] Setting ErrFile to fd 2...
	I1025 09:49:56.620537  269330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:56.620852  269330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:49:56.621266  269330 mustload.go:65] Loading cluster: addons-184548
	I1025 09:49:56.621701  269330 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:56.621762  269330 addons.go:606] checking whether the cluster is paused
	I1025 09:49:56.621915  269330 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:56.621949  269330 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:49:56.622825  269330 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:49:56.641860  269330 ssh_runner.go:195] Run: systemctl --version
	I1025 09:49:56.641940  269330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:49:56.662224  269330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:49:56.768622  269330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:49:56.768711  269330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:49:56.804305  269330 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:49:56.804324  269330 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:49:56.804329  269330 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:49:56.804332  269330 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:49:56.804336  269330 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:49:56.804339  269330 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:49:56.804342  269330 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:49:56.804345  269330 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:49:56.804349  269330 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:49:56.804355  269330 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:49:56.804358  269330 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:49:56.804361  269330 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:49:56.804364  269330 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:49:56.804367  269330 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:49:56.804370  269330 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:49:56.804377  269330 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:49:56.804381  269330 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:49:56.804385  269330 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:49:56.804389  269330 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:49:56.804391  269330 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:49:56.804396  269330 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:49:56.804399  269330 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:49:56.804402  269330 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:49:56.804404  269330 cri.go:89] found id: ""
	I1025 09:49:56.804453  269330 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:49:56.825340  269330 out.go:203] 
	W1025 09:49:56.828241  269330 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:49:56.828263  269330 out.go:285] * 
	* 
	W1025 09:49:56.834556  269330 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:49:56.837505  269330 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.72s)
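
This failure and the four that follow (CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd) share one root cause, visible in each stderr dump: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node, and on this CRI-O image the default runc state directory /run/runc does not exist, so every `addons disable` exits with MK_ADDON_DISABLE_PAUSED. A minimal reproduction sketch against this profile (`--root` is a standard runc flag; /run/runc-crio is a hypothetical state root, to be replaced by whatever the ls turns up):

	minikube -p addons-184548 ssh
	sudo runc list -f json                         # reproduces: open /run/runc: no such file or directory
	ls /run | grep -i -e runc -e crio              # locate the state root the runtime actually uses
	sudo runc --root /run/runc-crio list -f json   # hypothetical root; adjust to the directory found above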

x
+
TestAddons/parallel/CloudSpanner (6.37s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-wv5rr" [8107fe76-5560-4728-ab45-0cc2f5f11d00] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.009323263s
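
The emulator itself came up healthy; only the subsequent disable fails (see the note under TestAddons/parallel/Headlamp). Before teardown the pod can be exercised end to end with a port-forward; a sketch, assuming the deployment exposes the emulator's documented REST port 9020 (project names are arbitrary against an emulator):

	kubectl --context addons-184548 port-forward deploy/cloud-spanner-emulator 9020:9020 &
	sleep 2   # give the forward a moment to establish
	curl -s http://localhost:9020/v1/projects/test-project/instanceConfigs
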
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (354.053036ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:49:53.259744  268768 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:49:53.260443  268768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:53.260457  268768 out.go:374] Setting ErrFile to fd 2...
	I1025 09:49:53.260463  268768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:53.260810  268768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:49:53.261313  268768 mustload.go:65] Loading cluster: addons-184548
	I1025 09:49:53.261885  268768 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:53.261917  268768 addons.go:606] checking whether the cluster is paused
	I1025 09:49:53.262240  268768 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:53.262263  268768 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:49:53.263009  268768 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:49:53.282752  268768 ssh_runner.go:195] Run: systemctl --version
	I1025 09:49:53.282806  268768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:49:53.318241  268768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:49:53.433754  268768 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:49:53.433907  268768 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:49:53.471839  268768 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:49:53.471913  268768 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:49:53.471934  268768 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:49:53.471946  268768 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:49:53.471952  268768 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:49:53.471956  268768 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:49:53.471960  268768 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:49:53.471981  268768 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:49:53.471998  268768 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:49:53.472012  268768 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:49:53.472016  268768 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:49:53.472022  268768 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:49:53.472026  268768 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:49:53.472034  268768 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:49:53.472061  268768 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:49:53.472083  268768 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:49:53.472092  268768 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:49:53.472098  268768 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:49:53.472101  268768 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:49:53.472104  268768 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:49:53.472110  268768 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:49:53.472113  268768 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:49:53.472116  268768 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:49:53.472119  268768 cri.go:89] found id: ""
	I1025 09:49:53.472197  268768 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:49:53.492322  268768 out.go:203] 
	W1025 09:49:53.496184  268768 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:49:53.496224  268768 out.go:285] * 
	* 
	W1025 09:49:53.501424  268768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:49:53.504580  268768 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.37s)

x
+
TestAddons/parallel/LocalPath (10.42s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-184548 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-184548 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-184548 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [4de165a4-67d3-48fd-8b7a-18a8715dbd9a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [4de165a4-67d3-48fd-8b7a-18a8715dbd9a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [4de165a4-67d3-48fd-8b7a-18a8715dbd9a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004090186s
addons_test.go:967: (dbg) Run:  kubectl --context addons-184548 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 ssh "cat /opt/local-path-provisioner/pvc-d640ae38-7d8c-48d1-83d2-821c86fec4ed_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-184548 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-184548 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (277.847868ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:49:52.897482  268702 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:49:52.898337  268702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:52.898379  268702 out.go:374] Setting ErrFile to fd 2...
	I1025 09:49:52.898402  268702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:52.898723  268702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:49:52.899098  268702 mustload.go:65] Loading cluster: addons-184548
	I1025 09:49:52.899568  268702 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:52.899617  268702 addons.go:606] checking whether the cluster is paused
	I1025 09:49:52.899767  268702 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:52.899799  268702 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:49:52.900325  268702 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:49:52.917684  268702 ssh_runner.go:195] Run: systemctl --version
	I1025 09:49:52.917743  268702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:49:52.936116  268702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:49:53.040919  268702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:49:53.041010  268702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:49:53.086891  268702 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:49:53.086911  268702 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:49:53.086916  268702 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:49:53.086920  268702 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:49:53.086924  268702 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:49:53.086929  268702 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:49:53.086933  268702 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:49:53.086936  268702 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:49:53.086940  268702 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:49:53.086946  268702 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:49:53.086949  268702 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:49:53.086953  268702 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:49:53.086956  268702 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:49:53.086960  268702 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:49:53.086963  268702 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:49:53.086971  268702 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:49:53.086980  268702 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:49:53.086985  268702 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:49:53.086989  268702 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:49:53.086992  268702 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:49:53.086997  268702 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:49:53.087000  268702 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:49:53.087003  268702 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:49:53.087006  268702 cri.go:89] found id: ""
	I1025 09:49:53.087062  268702 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:49:53.104053  268702 out.go:203] 
	W1025 09:49:53.106495  268702 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:49:53.106537  268702 out.go:285] * 
	* 
	W1025 09:49:53.111617  268702 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:49:53.117007  268702 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.42s)
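
Everything up to the disable step passed: the PVC bound, the busybox pod wrote file1, and the file was readable under /opt/local-path-provisioner on the node. For reference, an illustrative reconstruction of the kind of claim applied from testdata/storage-provisioner-rancher (not the repo's exact file; local-path is the provisioner's conventional storage class, and its WaitForFirstConsumer binding is why the PVC polls as Pending until the consuming pod is scheduled):

	kubectl --context addons-184548 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  storageClassName: local-path
	  resources:
	    requests:
	      storage: 64Mi
	EOF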

x
+
TestAddons/parallel/NvidiaDevicePlugin (5.31s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-7sktv" [d6a26aea-18d0-46d2-a809-bf7ec95759f6] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00742201s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (300.890648ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:49:42.464473  268249 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:49:42.465141  268249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:42.465179  268249 out.go:374] Setting ErrFile to fd 2...
	I1025 09:49:42.465198  268249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:42.465550  268249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:49:42.465934  268249 mustload.go:65] Loading cluster: addons-184548
	I1025 09:49:42.466387  268249 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:42.466436  268249 addons.go:606] checking whether the cluster is paused
	I1025 09:49:42.466585  268249 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:42.466617  268249 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:49:42.467113  268249 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:49:42.486528  268249 ssh_runner.go:195] Run: systemctl --version
	I1025 09:49:42.486594  268249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:49:42.519709  268249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:49:42.628520  268249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:49:42.628686  268249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:49:42.670446  268249 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:49:42.670469  268249 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:49:42.670474  268249 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:49:42.670489  268249 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:49:42.670494  268249 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:49:42.670497  268249 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:49:42.670500  268249 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:49:42.670504  268249 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:49:42.670508  268249 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:49:42.670517  268249 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:49:42.670524  268249 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:49:42.670528  268249 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:49:42.670531  268249 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:49:42.670535  268249 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:49:42.670538  268249 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:49:42.670551  268249 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:49:42.670566  268249 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:49:42.670574  268249 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:49:42.670578  268249 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:49:42.670581  268249 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:49:42.670586  268249 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:49:42.670589  268249 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:49:42.670597  268249 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:49:42.670600  268249 cri.go:89] found id: ""
	I1025 09:49:42.670653  268249 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:49:42.686195  268249 out.go:203] 
	W1025 09:49:42.689718  268249 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:49:42.689747  268249 out.go:285] * 
	* 
	W1025 09:49:42.694900  268249 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:49:42.698415  268249 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.31s)
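
The device-plugin daemonset itself came up; the failure is again the paused-cluster check described under TestAddons/parallel/Headlamp. Independently of that, whether the plugin advertised a GPU resource can be read off the node object; a usage sketch (this arm64 CI runner presumably has no GPU, so an empty result would be expected):

	kubectl --context addons-184548 get node addons-184548 \
	  -o jsonpath="{.status.allocatable['nvidia\.com/gpu']}"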

x
+
TestAddons/parallel/Yakd (6.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-kjxsl" [b53695b8-3237-4f6d-9deb-426adaf8d530] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004062666s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-184548 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-184548 addons disable yakd --alsologtostderr -v=1: exit status 11 (253.426737ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:49:37.194113  268178 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:49:37.194636  268178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:37.194649  268178 out.go:374] Setting ErrFile to fd 2...
	I1025 09:49:37.194654  268178 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:37.194901  268178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:49:37.195218  268178 mustload.go:65] Loading cluster: addons-184548
	I1025 09:49:37.195582  268178 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:37.195607  268178 addons.go:606] checking whether the cluster is paused
	I1025 09:49:37.195717  268178 config.go:182] Loaded profile config "addons-184548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:49:37.195733  268178 host.go:66] Checking if "addons-184548" exists ...
	I1025 09:49:37.196217  268178 cli_runner.go:164] Run: docker container inspect addons-184548 --format={{.State.Status}}
	I1025 09:49:37.213427  268178 ssh_runner.go:195] Run: systemctl --version
	I1025 09:49:37.213490  268178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-184548
	I1025 09:49:37.231667  268178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/addons-184548/id_rsa Username:docker}
	I1025 09:49:37.337213  268178 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:49:37.337305  268178 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:49:37.367262  268178 cri.go:89] found id: "99ffce70564e9c1e93fb342fe28b7db09d0ba5ccdb15aff7acaf74c0c4c835a8"
	I1025 09:49:37.367291  268178 cri.go:89] found id: "87380913bd0ee1d3b726e240b0c37928db5773727a6e395762d7457cbd38d4d2"
	I1025 09:49:37.367296  268178 cri.go:89] found id: "917acafd898799b4997b73c940a4c49b9b4b9b1ec7117130db5523c09dcb1f37"
	I1025 09:49:37.367301  268178 cri.go:89] found id: "1511789ef92ecf3b8cb712e504e6d6e07218a8ddbec83e34c275e03a4dd81a60"
	I1025 09:49:37.367305  268178 cri.go:89] found id: "fe07be820f29e285303c3f92cd4907117cf3455093a7b79460151610fb5be848"
	I1025 09:49:37.367308  268178 cri.go:89] found id: "353ca1bd958555fff3e868153751459895dcc0e6530c26255c2e4d9e9f3521c5"
	I1025 09:49:37.367311  268178 cri.go:89] found id: "5c51c82f3805654c2a671fc37a2f5c91193162d6105f71af29ec775fb4947a4b"
	I1025 09:49:37.367314  268178 cri.go:89] found id: "d4fb23cf89dec985ddc583a2f99487e957378de0e274adf63ef6bd7ff30d4fbc"
	I1025 09:49:37.367319  268178 cri.go:89] found id: "6064a19490c795de01aeb924e16a842c44be71ff72d249ea57153da7eba1f9fc"
	I1025 09:49:37.367332  268178 cri.go:89] found id: "7c76fe020b3e5b813393dcdeb41b7c271ca356c0349ac3bb6a9b60ce92ee634e"
	I1025 09:49:37.367339  268178 cri.go:89] found id: "7e5a14fca747b468e6ead675759447325ff2660970cd57b772a9fd9fb511fe97"
	I1025 09:49:37.367343  268178 cri.go:89] found id: "9e7e539bbba9851c6b21ddbda7ab05096ff12b3ea52de3fb97926bd8e4354471"
	I1025 09:49:37.367347  268178 cri.go:89] found id: "0a744f4822b1c8196d03b052451e35d3752f739e87a78edb3ff880ada3a62caa"
	I1025 09:49:37.367350  268178 cri.go:89] found id: "97501aeea889609e1bc6cb13a189bcdb0e013cc980a2ff11666a65f22b076e9f"
	I1025 09:49:37.367353  268178 cri.go:89] found id: "fb43fde6f708119239fb4fc0feffd400afbd4640cb335ed22cdbe5b461afc1f8"
	I1025 09:49:37.367361  268178 cri.go:89] found id: "01b285afd58548299d42eb8fb367c019acd34eedf67eecee676c84578ed69e6f"
	I1025 09:49:37.367371  268178 cri.go:89] found id: "6dea6b6abd8b243565d3414c3c5a16625b0cad5dc890e522796ba0eb33521eeb"
	I1025 09:49:37.367377  268178 cri.go:89] found id: "ea0a2c59127ed1de3b2cc34db8da6794752aca3f85f3ca50dc4bbad88b24d692"
	I1025 09:49:37.367381  268178 cri.go:89] found id: "7c269e9ecf5ba5e7cc6f9e493dbabf1e77ceb647f08b3a5c353d7a702b6a7ced"
	I1025 09:49:37.367384  268178 cri.go:89] found id: "a9c067b0e9c58b7946603df485733ff618356cc6b10fde0fb7d121b3320d665a"
	I1025 09:49:37.367390  268178 cri.go:89] found id: "50b3905935f0c697a82075726d26cb056a12edbb7de740fac083e74c72edde90"
	I1025 09:49:37.367393  268178 cri.go:89] found id: "fc1be8cbffe437e2dcdce4f9f5911e6c06e22826e143a64d41168e9a51ae5c09"
	I1025 09:49:37.367396  268178 cri.go:89] found id: "703663e8a09cce37f7cec1e00ab2f874bfe4b4e49042f73b80b547840b775ef2"
	I1025 09:49:37.367399  268178 cri.go:89] found id: ""
	I1025 09:49:37.367459  268178 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:49:37.381905  268178 out.go:203] 
	W1025 09:49:37.383334  268178 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:49:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:49:37.383362  268178 out.go:285] * 
	* 
	W1025 09:49:37.388341  268178 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:49:37.389947  268178 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-184548 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

x
+
TestFunctional/parallel/ServiceCmdConnect (603.58s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-558907 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-558907 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-6zlhk" [29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-558907 -n functional-558907
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-25 10:06:30.490602764 +0000 UTC m=+1226.317407514
functional_test.go:1645: (dbg) Run:  kubectl --context functional-558907 describe po hello-node-connect-7d85dfc575-6zlhk -n default
functional_test.go:1645: (dbg) kubectl --context functional-558907 describe po hello-node-connect-7d85dfc575-6zlhk -n default:
Name:             hello-node-connect-7d85dfc575-6zlhk
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-558907/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:56:29 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lc574 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-lc574:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6zlhk to functional-558907
  Normal   Pulling    7m8s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m8s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m8s (x5 over 9m58s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m49s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m49s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-558907 logs hello-node-connect-7d85dfc575-6zlhk -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-558907 logs hello-node-connect-7d85dfc575-6zlhk -n default: exit status 1 (103.897122ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6zlhk" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-558907 logs hello-node-connect-7d85dfc575-6zlhk -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-558907 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-6zlhk
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-558907/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:56:29 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lc574 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lc574:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6zlhk to functional-558907
Normal   Pulling    7m8s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 9m58s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m49s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m49s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff
functional_test.go:1618: (dbg) Run:  kubectl --context functional-558907 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-558907 logs -l app=hello-node-connect: exit status 1 (97.998329ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-6zlhk" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1620: "kubectl --context functional-558907 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-558907 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.79.46
IPs:                      10.98.79.46
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31920/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
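Note the empty Endpoints field above: the NodePort service is fully provisioned (ClusterIP 10.98.79.46, NodePort 31920), but no ready pod matches the app=hello-node-connect selector, so traffic to the node port has nowhere to land. Two standard kubectl checks that surface this directly:

kubectl --context functional-558907 get endpoints hello-node-connect
kubectl --context functional-558907 get pods -l app=hello-node-connect -o wide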
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-558907
helpers_test.go:243: (dbg) docker inspect functional-558907:
-- stdout --
	[
	    {
	        "Id": "b20d89f21156d0f3f725c384c6f8a66eeca2258218f10dbf652204560bdc17fa",
	        "Created": "2025-10-25T09:53:47.942645251Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 276846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:53:48.001216424Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/b20d89f21156d0f3f725c384c6f8a66eeca2258218f10dbf652204560bdc17fa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b20d89f21156d0f3f725c384c6f8a66eeca2258218f10dbf652204560bdc17fa/hostname",
	        "HostsPath": "/var/lib/docker/containers/b20d89f21156d0f3f725c384c6f8a66eeca2258218f10dbf652204560bdc17fa/hosts",
	        "LogPath": "/var/lib/docker/containers/b20d89f21156d0f3f725c384c6f8a66eeca2258218f10dbf652204560bdc17fa/b20d89f21156d0f3f725c384c6f8a66eeca2258218f10dbf652204560bdc17fa-json.log",
	        "Name": "/functional-558907",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-558907:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-558907",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b20d89f21156d0f3f725c384c6f8a66eeca2258218f10dbf652204560bdc17fa",
	                "LowerDir": "/var/lib/docker/overlay2/476d534a4340547f8cb6668ca4f3cd00efb4e4b69e0992dbc36867600f056eb8-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/476d534a4340547f8cb6668ca4f3cd00efb4e4b69e0992dbc36867600f056eb8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/476d534a4340547f8cb6668ca4f3cd00efb4e4b69e0992dbc36867600f056eb8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/476d534a4340547f8cb6668ca4f3cd00efb4e4b69e0992dbc36867600f056eb8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-558907",
	                "Source": "/var/lib/docker/volumes/functional-558907/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-558907",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-558907",
	                "name.minikube.sigs.k8s.io": "functional-558907",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c104efb9650a54e6b419291950b8a88f655eeecab41c44dc275a3fc1dbda1d58",
	            "SandboxKey": "/var/run/docker/netns/c104efb9650a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-558907": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:e1:8c:5c:da:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6fe1214b22f9f71f2966ff7ccd8ebddc71b128575c0505db53ca06c50a34701",
	                    "EndpointID": "2f02317eab776341d50bdd6b8709c8abd8b04819ebd79b2ac4eb1418a013258d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-558907",
	                        "b20d89f21156"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
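When only a single field of this dump is needed, docker inspect -f with a Go template avoids scraping the full JSON; the harness itself uses the same pattern below to recover the SSH host port. For example, the host port mapped to the API server port 8441:

docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-558907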
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-558907 -n functional-558907
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-558907 logs -n 25: (1.434142342s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-558907 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                   │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ kubectl │ functional-558907 kubectl -- --context functional-558907 get pods                                                         │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:55 UTC │
	│ start   │ -p functional-558907 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:55 UTC │ 25 Oct 25 09:56 UTC │
	│ service │ invalid-svc -p functional-558907                                                                                          │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │                     │
	│ cp      │ functional-558907 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	│ config  │ functional-558907 config unset cpus                                                                                       │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	│ config  │ functional-558907 config get cpus                                                                                         │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │                     │
	│ config  │ functional-558907 config set cpus 2                                                                                       │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	│ config  │ functional-558907 config get cpus                                                                                         │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	│ config  │ functional-558907 config unset cpus                                                                                       │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	│ ssh     │ functional-558907 ssh -n functional-558907 sudo cat /home/docker/cp-test.txt                                              │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	│ config  │ functional-558907 config get cpus                                                                                         │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │                     │
	│ ssh     │ functional-558907 ssh echo hello                                                                                          │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	│ cp      │ functional-558907 cp functional-558907:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd596004806/001/cp-test.txt │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	│ ssh     │ functional-558907 ssh cat /etc/hostname                                                                                   │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	│ ssh     │ functional-558907 ssh -n functional-558907 sudo cat /home/docker/cp-test.txt                                              │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	│ tunnel  │ functional-558907 tunnel --alsologtostderr                                                                                │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │                     │
	│ tunnel  │ functional-558907 tunnel --alsologtostderr                                                                                │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │                     │
	│ cp      │ functional-558907 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	│ ssh     │ functional-558907 ssh -n functional-558907 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	│ tunnel  │ functional-558907 tunnel --alsologtostderr                                                                                │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │                     │
	│ addons  │ functional-558907 addons list                                                                                             │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	│ addons  │ functional-558907 addons list -o json                                                                                     │ functional-558907 │ jenkins │ v1.37.0 │ 25 Oct 25 09:56 UTC │ 25 Oct 25 09:56 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
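	Note: "crictl inspecti" in the first row is not a typo; crictl uses "inspect" for containers and "inspecti" for images.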
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:55:39
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:55:39.091852  281024 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:55:39.091980  281024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:55:39.091984  281024 out.go:374] Setting ErrFile to fd 2...
	I1025 09:55:39.091988  281024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:55:39.092293  281024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:55:39.092715  281024 out.go:368] Setting JSON to false
	I1025 09:55:39.093759  281024 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5890,"bootTime":1761380249,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:55:39.093820  281024 start.go:141] virtualization:  
	I1025 09:55:39.097557  281024 out.go:179] * [functional-558907] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:55:39.100770  281024 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:55:39.100841  281024 notify.go:220] Checking for updates...
	I1025 09:55:39.104849  281024 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:55:39.107788  281024 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 09:55:39.110655  281024 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 09:55:39.113527  281024 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:55:39.116330  281024 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:55:39.119847  281024 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:55:39.119946  281024 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:55:39.154365  281024 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:55:39.154476  281024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:55:39.222439  281024 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-25 09:55:39.213230647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:55:39.222534  281024 docker.go:318] overlay module found
	I1025 09:55:39.225609  281024 out.go:179] * Using the docker driver based on existing profile
	I1025 09:55:39.228434  281024 start.go:305] selected driver: docker
	I1025 09:55:39.228444  281024 start.go:925] validating driver "docker" against &{Name:functional-558907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:55:39.228554  281024 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:55:39.228673  281024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:55:39.283834  281024 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-25 09:55:39.27485058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:55:39.284266  281024 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:55:39.284289  281024 cni.go:84] Creating CNI manager for ""
	I1025 09:55:39.284345  281024 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:55:39.284386  281024 start.go:349] cluster config:
	{Name:functional-558907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:55:39.287614  281024 out.go:179] * Starting "functional-558907" primary control-plane node in "functional-558907" cluster
	I1025 09:55:39.290451  281024 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:55:39.293278  281024 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:55:39.296121  281024 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:55:39.296168  281024 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:55:39.296176  281024 cache.go:58] Caching tarball of preloaded images
	I1025 09:55:39.296209  281024 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:55:39.296270  281024 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:55:39.296279  281024 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:55:39.296396  281024 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/config.json ...
	I1025 09:55:39.314862  281024 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:55:39.314874  281024 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:55:39.314885  281024 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:55:39.314917  281024 start.go:360] acquireMachinesLock for functional-558907: {Name:mkc1d03789fbf19b3a26cdce527bb71a60e5d5e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:55:39.314971  281024 start.go:364] duration metric: took 38.589µs to acquireMachinesLock for "functional-558907"
	I1025 09:55:39.314990  281024 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:55:39.314995  281024 fix.go:54] fixHost starting: 
	I1025 09:55:39.315260  281024 cli_runner.go:164] Run: docker container inspect functional-558907 --format={{.State.Status}}
	I1025 09:55:39.331773  281024 fix.go:112] recreateIfNeeded on functional-558907: state=Running err=<nil>
	W1025 09:55:39.331794  281024 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:55:39.334887  281024 out.go:252] * Updating the running docker "functional-558907" container ...
	I1025 09:55:39.334910  281024 machine.go:93] provisionDockerMachine start ...
	I1025 09:55:39.335000  281024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
	I1025 09:55:39.351731  281024 main.go:141] libmachine: Using SSH client type: native
	I1025 09:55:39.352034  281024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1025 09:55:39.352041  281024 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:55:39.501592  281024 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-558907
	
	I1025 09:55:39.501606  281024 ubuntu.go:182] provisioning hostname "functional-558907"
	I1025 09:55:39.501669  281024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
	I1025 09:55:39.520400  281024 main.go:141] libmachine: Using SSH client type: native
	I1025 09:55:39.520698  281024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1025 09:55:39.520707  281024 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-558907 && echo "functional-558907" | sudo tee /etc/hostname
	I1025 09:55:39.679796  281024 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-558907
	
	I1025 09:55:39.679865  281024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
	I1025 09:55:39.698663  281024 main.go:141] libmachine: Using SSH client type: native
	I1025 09:55:39.698967  281024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1025 09:55:39.698981  281024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-558907' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-558907/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-558907' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:55:39.850487  281024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:55:39.850505  281024 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 09:55:39.850526  281024 ubuntu.go:190] setting up certificates
	I1025 09:55:39.850538  281024 provision.go:84] configureAuth start
	I1025 09:55:39.850611  281024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-558907
	I1025 09:55:39.869181  281024 provision.go:143] copyHostCerts
	I1025 09:55:39.869242  281024 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 09:55:39.869258  281024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 09:55:39.869332  281024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 09:55:39.869430  281024 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 09:55:39.869434  281024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 09:55:39.869459  281024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 09:55:39.869555  281024 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 09:55:39.869559  281024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 09:55:39.869582  281024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 09:55:39.869638  281024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.functional-558907 san=[127.0.0.1 192.168.49.2 functional-558907 localhost minikube]
	I1025 09:55:39.974749  281024 provision.go:177] copyRemoteCerts
	I1025 09:55:39.974805  281024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:55:39.974843  281024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
	I1025 09:55:39.991891  281024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/functional-558907/id_rsa Username:docker}
	I1025 09:55:40.119380  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 09:55:40.138098  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:55:40.156558  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:55:40.175119  281024 provision.go:87] duration metric: took 324.558205ms to configureAuth
	I1025 09:55:40.175137  281024 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:55:40.175338  281024 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:55:40.175437  281024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
	I1025 09:55:40.193498  281024 main.go:141] libmachine: Using SSH client type: native
	I1025 09:55:40.193798  281024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1025 09:55:40.193810  281024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:55:45.607132  281024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:55:45.607143  281024 machine.go:96] duration metric: took 6.27222711s to provisionDockerMachine
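	The drop-in written above hands CRI-O "--insecure-registry 10.96.0.0/12", which is the cluster's ServiceCIDR from the config dump earlier, presumably so that images can be pulled over plain HTTP from a registry Service running inside the cluster; the "systemctl restart crio" at the end of that command is why this single provisioning step accounts for most of the 6.27s. A quick sanity check of the file (assuming, as on the kicbase image, that the crio unit sources it via EnvironmentFile):
	
	minikube -p functional-558907 ssh "cat /etc/sysconfig/crio.minikube"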
	I1025 09:55:45.607152  281024 start.go:293] postStartSetup for "functional-558907" (driver="docker")
	I1025 09:55:45.607161  281024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:55:45.607221  281024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:55:45.607270  281024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
	I1025 09:55:45.625672  281024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/functional-558907/id_rsa Username:docker}
	I1025 09:55:45.730550  281024 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:55:45.734178  281024 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:55:45.734199  281024 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:55:45.734208  281024 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 09:55:45.734262  281024 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 09:55:45.734349  281024 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 09:55:45.734424  281024 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/test/nested/copy/261256/hosts -> hosts in /etc/test/nested/copy/261256
	I1025 09:55:45.734467  281024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/261256
	I1025 09:55:45.742269  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 09:55:45.760349  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/test/nested/copy/261256/hosts --> /etc/test/nested/copy/261256/hosts (40 bytes)
	I1025 09:55:45.779050  281024 start.go:296] duration metric: took 171.883538ms for postStartSetup
	I1025 09:55:45.779125  281024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:55:45.779182  281024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
	I1025 09:55:45.796608  281024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/functional-558907/id_rsa Username:docker}
	I1025 09:55:45.899371  281024 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:55:45.904261  281024 fix.go:56] duration metric: took 6.589258029s for fixHost
	I1025 09:55:45.904276  281024 start.go:83] releasing machines lock for "functional-558907", held for 6.589297497s
	I1025 09:55:45.904347  281024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-558907
	I1025 09:55:45.921930  281024 ssh_runner.go:195] Run: cat /version.json
	I1025 09:55:45.921976  281024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
	I1025 09:55:45.922035  281024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:55:45.922098  281024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
	I1025 09:55:45.944679  281024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/functional-558907/id_rsa Username:docker}
	I1025 09:55:45.951890  281024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/functional-558907/id_rsa Username:docker}
	I1025 09:55:46.135516  281024 ssh_runner.go:195] Run: systemctl --version
	I1025 09:55:46.142217  281024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:55:46.183371  281024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:55:46.187928  281024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:55:46.187992  281024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:55:46.196062  281024 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:55:46.196077  281024 start.go:495] detecting cgroup driver to use...
	I1025 09:55:46.196108  281024 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:55:46.196158  281024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:55:46.211484  281024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:55:46.224833  281024 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:55:46.224887  281024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:55:46.241014  281024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:55:46.254761  281024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:55:46.395764  281024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:55:46.531248  281024 docker.go:234] disabling docker service ...
	I1025 09:55:46.531306  281024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:55:46.546945  281024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:55:46.560336  281024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:55:46.704010  281024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:55:46.844220  281024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:55:46.858879  281024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:55:46.874203  281024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:55:46.874257  281024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:55:46.883284  281024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:55:46.883351  281024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:55:46.892360  281024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:55:46.901251  281024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:55:46.910498  281024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:55:46.918862  281024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:55:46.928573  281024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:55:46.937717  281024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:55:46.947674  281024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:55:46.955485  281024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:55:46.962980  281024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:55:47.093930  281024 ssh_runner.go:195] Run: sudo systemctl restart crio
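	Taken together, the sed edits above amount to a 02-crio.conf fragment along these lines (reconstructed from the commands, not captured from the node; the section headers are assumed from the standard crio.conf layout):
	
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]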
	I1025 09:55:47.307570  281024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:55:47.307627  281024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:55:47.311387  281024 start.go:563] Will wait 60s for crictl version
	I1025 09:55:47.311456  281024 ssh_runner.go:195] Run: which crictl
	I1025 09:55:47.315015  281024 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:55:47.339283  281024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:55:47.339365  281024 ssh_runner.go:195] Run: crio --version
	I1025 09:55:47.369659  281024 ssh_runner.go:195] Run: crio --version
	I1025 09:55:47.404371  281024 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:55:47.407249  281024 cli_runner.go:164] Run: docker network inspect functional-558907 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:55:47.423662  281024 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 09:55:47.430946  281024 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1025 09:55:47.433901  281024 kubeadm.go:883] updating cluster {Name:functional-558907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:55:47.434045  281024 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:55:47.434113  281024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:55:47.469576  281024 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:55:47.469588  281024 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:55:47.469644  281024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:55:47.501553  281024 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:55:47.501566  281024 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:55:47.501576  281024 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1025 09:55:47.501685  281024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-558907 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-558907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
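The [Service] fragment above is a systemd ExecStart override: the empty ExecStart= clears any distro default before the minikube-specific kubelet command line is set. It is written out a few lines below as the 10-kubeadm.conf drop-in. Applying such an override by hand follows the same pattern; a sketch using the unit content and paths from the log (not the exact file minikube writes):

    # Illustrative: install a kubelet ExecStart override as a systemd drop-in
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-558907 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
    EOF
    sudo systemctl daemon-reload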
	I1025 09:55:47.501763  281024 ssh_runner.go:195] Run: crio config
	I1025 09:55:47.557735  281024 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1025 09:55:47.557765  281024 cni.go:84] Creating CNI manager for ""
	I1025 09:55:47.557776  281024 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:55:47.557788  281024 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:55:47.557817  281024 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-558907 NodeName:functional-558907 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:55:47.557958  281024 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-558907"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 09:55:47.558056  281024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:55:47.567332  281024 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:55:47.567398  281024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:55:47.574984  281024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 09:55:47.588149  281024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:55:47.601397  281024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
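At this point the regenerated kubeadm config has been staged as kubeadm.yaml.new. A config of this shape can be sanity-checked offline before it is swapped in; a minimal sketch, assuming the bundled kubeadm supports the `kubeadm config validate` subcommand (present in recent releases):

    # Illustrative: validate the staged kubeadm config against the API schema
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new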
	I1025 09:55:47.614119  281024 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:55:47.617815  281024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:55:47.760110  281024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:55:47.773432  281024 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907 for IP: 192.168.49.2
	I1025 09:55:47.773443  281024 certs.go:195] generating shared ca certs ...
	I1025 09:55:47.773457  281024 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:55:47.773592  281024 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 09:55:47.773630  281024 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 09:55:47.773635  281024 certs.go:257] generating profile certs ...
	I1025 09:55:47.773713  281024 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.key
	I1025 09:55:47.773754  281024 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/apiserver.key.4e717ac2
	I1025 09:55:47.773796  281024 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/proxy-client.key
	I1025 09:55:47.773909  281024 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 09:55:47.773936  281024 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 09:55:47.773944  281024 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 09:55:47.773966  281024 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 09:55:47.774081  281024 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:55:47.774105  281024 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 09:55:47.774153  281024 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 09:55:47.774758  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:55:47.794919  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 09:55:47.813870  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:55:47.831717  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 09:55:47.881467  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:55:47.905172  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:55:47.931195  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:55:47.956166  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:55:47.976374  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 09:55:47.996052  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:55:48.020652  281024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 09:55:48.041317  281024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:55:48.054884  281024 ssh_runner.go:195] Run: openssl version
	I1025 09:55:48.061631  281024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 09:55:48.070863  281024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 09:55:48.075388  281024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 09:55:48.075454  281024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 09:55:48.116683  281024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:55:48.124601  281024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:55:48.133015  281024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:55:48.136870  281024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:55:48.136926  281024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:55:48.178094  281024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:55:48.186115  281024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 09:55:48.194369  281024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 09:55:48.197973  281024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 09:55:48.198050  281024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 09:55:48.239018  281024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
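The test/ln pairs above install each CA into the OpenSSL trust directory under its subject-hash name: `openssl x509 -hash` prints the short hash (b5213941 for minikubeCA.pem, per the log), and the `.0` symlink under /etc/ssl/certs is what OpenSSL's lookup resolves. The same link can be derived by hand:

    # Illustrative: compute the hash name OpenSSL expects and create the trust link
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"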
	I1025 09:55:48.247086  281024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:55:48.251427  281024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:55:48.293319  281024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:55:48.333905  281024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:55:48.375113  281024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:55:48.415810  281024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:55:48.456955  281024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
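Each `-checkend 86400` call above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a zero exit means it will, and a non-zero exit would trigger regeneration. As a standalone check:

    # Illustrative: fail if a control-plane cert expires within 24 hours
    if ! sudo openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo 'certificate expires within 24h; regeneration needed' >&2
    fi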
	I1025 09:55:48.500841  281024 kubeadm.go:400] StartCluster: {Name:functional-558907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:55:48.500929  281024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:55:48.501003  281024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:55:48.527882  281024 cri.go:89] found id: "8c05c737458aa689b5c1d83b7aac482fd3eb0e689c694e1f0c984953515c7bf8"
	I1025 09:55:48.527894  281024 cri.go:89] found id: "62a62b4e06c0dff54dea609247eeb1fbff75bbda9423498ebf01ac5e3e02dda1"
	I1025 09:55:48.527897  281024 cri.go:89] found id: "7a6b52d410377be495e7b86e43ac8d5cac712ab6a55e939ddc630ba8ce2b4c43"
	I1025 09:55:48.527900  281024 cri.go:89] found id: "fadeee9beae5be8e4323f2f37468956d45ec8bf24eab4b3e25b38515b3c4a9ae"
	I1025 09:55:48.527902  281024 cri.go:89] found id: "5a82b264b27e5561d8f525e7ee11b984cfb1fd289b74943e7d4ba5d8f191daaf"
	I1025 09:55:48.527905  281024 cri.go:89] found id: "8bf36daee1c415d03c7cf4a6a8b445429c13535c9f31a872f174f4fe5e92e2f1"
	I1025 09:55:48.527907  281024 cri.go:89] found id: "058f88bb450f9bb3a12b91f3e4715706d02f7b950cf1bbf5a062314b291db72a"
	I1025 09:55:48.527909  281024 cri.go:89] found id: "569169d7199d048e999ea9f423c61726ebfdb9625105e6f0f38710fc7abc7203"
	I1025 09:55:48.527912  281024 cri.go:89] found id: ""
	I1025 09:55:48.527963  281024 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:55:48.538720  281024 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:55:48Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:55:48.538810  281024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:55:48.546551  281024 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:55:48.546561  281024 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:55:48.546613  281024 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:55:48.553820  281024 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:55:48.554361  281024 kubeconfig.go:125] found "functional-558907" server: "https://192.168.49.2:8441"
	I1025 09:55:48.555647  281024 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:55:48.563383  281024 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-25 09:53:58.070788868 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-25 09:55:47.607746041 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
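The drift check reduces to the `diff -u` run a few lines above: any hunk between the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new (here, the admission-plugins override) makes minikube reconfigure the control plane rather than reuse it. Reproduced by hand:

    # Illustrative: diff exits non-zero on any difference, signalling drift
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      || echo 'kubeadm config drift detected; control plane will be reconfigured'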
	I1025 09:55:48.563394  281024 kubeadm.go:1160] stopping kube-system containers ...
	I1025 09:55:48.563410  281024 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1025 09:55:48.563468  281024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:55:48.590614  281024 cri.go:89] found id: "8c05c737458aa689b5c1d83b7aac482fd3eb0e689c694e1f0c984953515c7bf8"
	I1025 09:55:48.590626  281024 cri.go:89] found id: "62a62b4e06c0dff54dea609247eeb1fbff75bbda9423498ebf01ac5e3e02dda1"
	I1025 09:55:48.590630  281024 cri.go:89] found id: "7a6b52d410377be495e7b86e43ac8d5cac712ab6a55e939ddc630ba8ce2b4c43"
	I1025 09:55:48.590633  281024 cri.go:89] found id: "fadeee9beae5be8e4323f2f37468956d45ec8bf24eab4b3e25b38515b3c4a9ae"
	I1025 09:55:48.590635  281024 cri.go:89] found id: "5a82b264b27e5561d8f525e7ee11b984cfb1fd289b74943e7d4ba5d8f191daaf"
	I1025 09:55:48.590638  281024 cri.go:89] found id: "8bf36daee1c415d03c7cf4a6a8b445429c13535c9f31a872f174f4fe5e92e2f1"
	I1025 09:55:48.590641  281024 cri.go:89] found id: "058f88bb450f9bb3a12b91f3e4715706d02f7b950cf1bbf5a062314b291db72a"
	I1025 09:55:48.590643  281024 cri.go:89] found id: "569169d7199d048e999ea9f423c61726ebfdb9625105e6f0f38710fc7abc7203"
	I1025 09:55:48.590645  281024 cri.go:89] found id: ""
	I1025 09:55:48.590649  281024 cri.go:252] Stopping containers: [8c05c737458aa689b5c1d83b7aac482fd3eb0e689c694e1f0c984953515c7bf8 62a62b4e06c0dff54dea609247eeb1fbff75bbda9423498ebf01ac5e3e02dda1 7a6b52d410377be495e7b86e43ac8d5cac712ab6a55e939ddc630ba8ce2b4c43 fadeee9beae5be8e4323f2f37468956d45ec8bf24eab4b3e25b38515b3c4a9ae 5a82b264b27e5561d8f525e7ee11b984cfb1fd289b74943e7d4ba5d8f191daaf 8bf36daee1c415d03c7cf4a6a8b445429c13535c9f31a872f174f4fe5e92e2f1 058f88bb450f9bb3a12b91f3e4715706d02f7b950cf1bbf5a062314b291db72a 569169d7199d048e999ea9f423c61726ebfdb9625105e6f0f38710fc7abc7203]
	I1025 09:55:48.590710  281024 ssh_runner.go:195] Run: which crictl
	I1025 09:55:48.594501  281024 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 8c05c737458aa689b5c1d83b7aac482fd3eb0e689c694e1f0c984953515c7bf8 62a62b4e06c0dff54dea609247eeb1fbff75bbda9423498ebf01ac5e3e02dda1 7a6b52d410377be495e7b86e43ac8d5cac712ab6a55e939ddc630ba8ce2b4c43 fadeee9beae5be8e4323f2f37468956d45ec8bf24eab4b3e25b38515b3c4a9ae 5a82b264b27e5561d8f525e7ee11b984cfb1fd289b74943e7d4ba5d8f191daaf 8bf36daee1c415d03c7cf4a6a8b445429c13535c9f31a872f174f4fe5e92e2f1 058f88bb450f9bb3a12b91f3e4715706d02f7b950cf1bbf5a062314b291db72a 569169d7199d048e999ea9f423c61726ebfdb9625105e6f0f38710fc7abc7203
	I1025 09:55:48.660604  281024 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 09:55:48.781187  281024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:55:48.789286  281024 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 25 09:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct 25 09:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 25 09:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct 25 09:54 /etc/kubernetes/scheduler.conf
	
	I1025 09:55:48.789345  281024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1025 09:55:48.797450  281024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1025 09:55:48.805101  281024 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:55:48.805154  281024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:55:48.812557  281024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1025 09:55:48.820115  281024 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:55:48.820174  281024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:55:48.827791  281024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1025 09:55:48.835480  281024 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:55:48.835542  281024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
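The grep/rm pairs above enforce a simple invariant: every kubeconfig under /etc/kubernetes must point at https://control-plane.minikube.internal:8441, and any file that does not is deleted so the kubeadm kubeconfig phase below regenerates it. The per-file logic, as a sketch:

    # Illustrative: drop kubeconfigs whose server URL no longer matches the control plane
    for f in kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done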
	I1025 09:55:48.843018  281024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:55:48.851074  281024 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:55:48.900455  281024 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:55:52.455912  281024 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.555431631s)
	I1025 09:55:52.455973  281024 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:55:52.680411  281024 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:55:52.745452  281024 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
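Rather than a full `kubeadm init`, the restart path re-runs individual init phases against the staged config, in the order logged above. The equivalent manual sequence (binary and config paths from the log):

    # Illustrative: selective kubeadm phase re-runs used on control-plane restart
    KUBEADM=/var/lib/minikube/binaries/v1.34.1/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo "$KUBEADM" init phase certs all --config "$CFG"
    sudo "$KUBEADM" init phase kubeconfig all --config "$CFG"
    sudo "$KUBEADM" init phase kubelet-start --config "$CFG"
    sudo "$KUBEADM" init phase control-plane all --config "$CFG"
    sudo "$KUBEADM" init phase etcd local --config "$CFG"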
	I1025 09:55:52.823238  281024 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:55:52.823303  281024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:55:53.323886  281024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:55:53.823728  281024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:55:53.840154  281024 api_server.go:72] duration metric: took 1.016913213s to wait for apiserver process to appear ...
	I1025 09:55:53.840167  281024 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:55:53.840186  281024 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:55:57.111472  281024 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 09:55:57.111488  281024 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 09:55:57.111500  281024 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:55:57.147587  281024 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 09:55:57.147603  281024 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 09:55:57.340928  281024 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:55:57.350649  281024 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:55:57.350665  281024 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:55:57.841202  281024 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:55:57.852121  281024 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:55:57.852140  281024 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:55:58.340257  281024 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:55:58.351525  281024 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:55:58.351540  281024 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:55:58.841197  281024 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:55:58.849970  281024 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
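The poll above walks through the apiserver's normal startup progression: 403 while the server is up but anonymous access to /healthz is not yet authorized, 500 while post-start hooks (rbac/bootstrap-roles, the scheduling priority classes) are still pending, then a plain 200 `ok`. The check-by-check breakdown seen in the 500 responses comes from the verbose form of the same endpoint; a rough manual probe (here `-k` stands in for proper CA configuration):

    # Illustrative: query the apiserver health endpoint with per-check detail
    curl -ks 'https://192.168.49.2:8441/healthz?verbose'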
	I1025 09:55:58.864146  281024 api_server.go:141] control plane version: v1.34.1
	I1025 09:55:58.864162  281024 api_server.go:131] duration metric: took 5.023989543s to wait for apiserver health ...
	I1025 09:55:58.864169  281024 cni.go:84] Creating CNI manager for ""
	I1025 09:55:58.864174  281024 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:55:58.867943  281024 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:55:58.870932  281024 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:55:58.875235  281024 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:55:58.875247  281024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:55:58.889267  281024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:55:59.405858  281024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:55:59.409509  281024 system_pods.go:59] 8 kube-system pods found
	I1025 09:55:59.409531  281024 system_pods.go:61] "coredns-66bc5c9577-7svv2" [9cfd5d7e-f6c4-473f-8982-0c42b185e504] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:55:59.409538  281024 system_pods.go:61] "etcd-functional-558907" [5d06acfd-2c94-47a6-adaf-2a0740322141] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:55:59.409543  281024 system_pods.go:61] "kindnet-tfc9f" [3fcf1e71-dd7f-4e89-942c-edd4e56ffb9a] Running
	I1025 09:55:59.409550  281024 system_pods.go:61] "kube-apiserver-functional-558907" [d30b4f5c-1084-4aa3-bd8f-0d028f5de741] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:55:59.409555  281024 system_pods.go:61] "kube-controller-manager-functional-558907" [f5effc2e-cea6-4e0d-a9f9-f5fe74187bbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:55:59.409559  281024 system_pods.go:61] "kube-proxy-4gvgp" [85c3d516-9a0b-4a01-91e8-b86b86c7e184] Running
	I1025 09:55:59.409564  281024 system_pods.go:61] "kube-scheduler-functional-558907" [2084e513-bb53-4af1-bba6-264b967ca823] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:55:59.409567  281024 system_pods.go:61] "storage-provisioner" [fb41ca7d-0d8d-443a-99f3-5ca45633e048] Running
	I1025 09:55:59.409572  281024 system_pods.go:74] duration metric: took 3.704878ms to wait for pod list to return data ...
	I1025 09:55:59.409578  281024 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:55:59.412512  281024 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:55:59.412530  281024 node_conditions.go:123] node cpu capacity is 2
	I1025 09:55:59.412540  281024 node_conditions.go:105] duration metric: took 2.958623ms to run NodePressure ...
	I1025 09:55:59.412599  281024 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:55:59.664085  281024 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1025 09:55:59.667323  281024 kubeadm.go:743] kubelet initialised
	I1025 09:55:59.667333  281024 kubeadm.go:744] duration metric: took 3.235934ms waiting for restarted kubelet to initialise ...
	I1025 09:55:59.667347  281024 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:55:59.676904  281024 ops.go:34] apiserver oom_adj: -16
	I1025 09:55:59.676918  281024 kubeadm.go:601] duration metric: took 11.130349341s to restartPrimaryControlPlane
	I1025 09:55:59.676925  281024 kubeadm.go:402] duration metric: took 11.176095889s to StartCluster
	I1025 09:55:59.676951  281024 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:55:59.677029  281024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 09:55:59.677690  281024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:55:59.677925  281024 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:55:59.678345  281024 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:55:59.678388  281024 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:55:59.678520  281024 addons.go:69] Setting storage-provisioner=true in profile "functional-558907"
	I1025 09:55:59.678535  281024 addons.go:238] Setting addon storage-provisioner=true in "functional-558907"
	W1025 09:55:59.678540  281024 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:55:59.678562  281024 host.go:66] Checking if "functional-558907" exists ...
	I1025 09:55:59.678597  281024 addons.go:69] Setting default-storageclass=true in profile "functional-558907"
	I1025 09:55:59.678613  281024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-558907"
	I1025 09:55:59.678941  281024 cli_runner.go:164] Run: docker container inspect functional-558907 --format={{.State.Status}}
	I1025 09:55:59.679012  281024 cli_runner.go:164] Run: docker container inspect functional-558907 --format={{.State.Status}}
	I1025 09:55:59.682482  281024 out.go:179] * Verifying Kubernetes components...
	I1025 09:55:59.686179  281024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:55:59.713947  281024 addons.go:238] Setting addon default-storageclass=true in "functional-558907"
	W1025 09:55:59.713958  281024 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:55:59.714041  281024 host.go:66] Checking if "functional-558907" exists ...
	I1025 09:55:59.714460  281024 cli_runner.go:164] Run: docker container inspect functional-558907 --format={{.State.Status}}
	I1025 09:55:59.715673  281024 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:55:59.718940  281024 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:55:59.718951  281024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:55:59.719020  281024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
	I1025 09:55:59.744869  281024 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:55:59.744882  281024 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:55:59.744941  281024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
	I1025 09:55:59.753962  281024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/functional-558907/id_rsa Username:docker}
	I1025 09:55:59.778065  281024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/functional-558907/id_rsa Username:docker}
	I1025 09:55:59.901272  281024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:55:59.922866  281024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:55:59.939878  281024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:56:01.124826  281024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.223527894s)
	I1025 09:56:01.124892  281024 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.202013459s)
	I1025 09:56:01.124912  281024 node_ready.go:35] waiting up to 6m0s for node "functional-558907" to be "Ready" ...
	I1025 09:56:01.125142  281024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.185250877s)
	I1025 09:56:01.129193  281024 node_ready.go:49] node "functional-558907" is "Ready"
	I1025 09:56:01.129211  281024 node_ready.go:38] duration metric: took 4.287192ms for node "functional-558907" to be "Ready" ...
	I1025 09:56:01.129223  281024 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:56:01.129285  281024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:56:01.138468  281024 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 09:56:01.141351  281024 addons.go:514] duration metric: took 1.462937491s for enable addons: enabled=[storage-provisioner default-storageclass]
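With both addons applied, the result can be confirmed through the cluster itself; the storage-provisioner pod name appears in the pod listings above, and the storageclass manifest was just applied. A sketch using the bundled kubectl and kubeconfig paths from the log:

    # Illustrative: confirm the two enabled addons landed
    KUBECTL=/var/lib/minikube/binaries/v1.34.1/kubectl
    sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pod storage-provisioner
    sudo "$KUBECTL" --kubeconfig=/var/lib/minikube/kubeconfig get storageclass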
	I1025 09:56:01.147771  281024 api_server.go:72] duration metric: took 1.469806933s to wait for apiserver process to appear ...
	I1025 09:56:01.147785  281024 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:56:01.147809  281024 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:56:01.157030  281024 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1025 09:56:01.158158  281024 api_server.go:141] control plane version: v1.34.1
	I1025 09:56:01.158173  281024 api_server.go:131] duration metric: took 10.382572ms to wait for apiserver health ...
	I1025 09:56:01.158180  281024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:56:01.161961  281024 system_pods.go:59] 8 kube-system pods found
	I1025 09:56:01.162012  281024 system_pods.go:61] "coredns-66bc5c9577-7svv2" [9cfd5d7e-f6c4-473f-8982-0c42b185e504] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:56:01.162020  281024 system_pods.go:61] "etcd-functional-558907" [5d06acfd-2c94-47a6-adaf-2a0740322141] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:56:01.162025  281024 system_pods.go:61] "kindnet-tfc9f" [3fcf1e71-dd7f-4e89-942c-edd4e56ffb9a] Running
	I1025 09:56:01.162031  281024 system_pods.go:61] "kube-apiserver-functional-558907" [d30b4f5c-1084-4aa3-bd8f-0d028f5de741] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:56:01.162037  281024 system_pods.go:61] "kube-controller-manager-functional-558907" [f5effc2e-cea6-4e0d-a9f9-f5fe74187bbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:56:01.162040  281024 system_pods.go:61] "kube-proxy-4gvgp" [85c3d516-9a0b-4a01-91e8-b86b86c7e184] Running
	I1025 09:56:01.162053  281024 system_pods.go:61] "kube-scheduler-functional-558907" [2084e513-bb53-4af1-bba6-264b967ca823] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:56:01.162056  281024 system_pods.go:61] "storage-provisioner" [fb41ca7d-0d8d-443a-99f3-5ca45633e048] Running
	I1025 09:56:01.162062  281024 system_pods.go:74] duration metric: took 3.877039ms to wait for pod list to return data ...
	I1025 09:56:01.162069  281024 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:56:01.164715  281024 default_sa.go:45] found service account: "default"
	I1025 09:56:01.164728  281024 default_sa.go:55] duration metric: took 2.654809ms for default service account to be created ...
	I1025 09:56:01.164735  281024 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:56:01.168409  281024 system_pods.go:86] 8 kube-system pods found
	I1025 09:56:01.168428  281024 system_pods.go:89] "coredns-66bc5c9577-7svv2" [9cfd5d7e-f6c4-473f-8982-0c42b185e504] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:56:01.168436  281024 system_pods.go:89] "etcd-functional-558907" [5d06acfd-2c94-47a6-adaf-2a0740322141] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:56:01.168450  281024 system_pods.go:89] "kindnet-tfc9f" [3fcf1e71-dd7f-4e89-942c-edd4e56ffb9a] Running
	I1025 09:56:01.168456  281024 system_pods.go:89] "kube-apiserver-functional-558907" [d30b4f5c-1084-4aa3-bd8f-0d028f5de741] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:56:01.168462  281024 system_pods.go:89] "kube-controller-manager-functional-558907" [f5effc2e-cea6-4e0d-a9f9-f5fe74187bbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:56:01.168465  281024 system_pods.go:89] "kube-proxy-4gvgp" [85c3d516-9a0b-4a01-91e8-b86b86c7e184] Running
	I1025 09:56:01.168470  281024 system_pods.go:89] "kube-scheduler-functional-558907" [2084e513-bb53-4af1-bba6-264b967ca823] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:56:01.168472  281024 system_pods.go:89] "storage-provisioner" [fb41ca7d-0d8d-443a-99f3-5ca45633e048] Running
	I1025 09:56:01.168478  281024 system_pods.go:126] duration metric: took 3.73801ms to wait for k8s-apps to be running ...
	I1025 09:56:01.168484  281024 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:56:01.168543  281024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:56:01.182624  281024 system_svc.go:56] duration metric: took 14.130322ms WaitForService to wait for kubelet
	I1025 09:56:01.182644  281024 kubeadm.go:586] duration metric: took 1.504685595s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:56:01.182661  281024 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:56:01.185603  281024 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:56:01.185619  281024 node_conditions.go:123] node cpu capacity is 2
	I1025 09:56:01.185628  281024 node_conditions.go:105] duration metric: took 2.962446ms to run NodePressure ...
	I1025 09:56:01.185639  281024 start.go:241] waiting for startup goroutines ...
	I1025 09:56:01.185645  281024 start.go:246] waiting for cluster config update ...
	I1025 09:56:01.185654  281024 start.go:255] writing updated cluster config ...
	I1025 09:56:01.186018  281024 ssh_runner.go:195] Run: rm -f paused
	I1025 09:56:01.189651  281024 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:56:01.193138  281024 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7svv2" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:56:03.199189  281024 pod_ready.go:104] pod "coredns-66bc5c9577-7svv2" is not "Ready", error: <nil>
	I1025 09:56:03.699052  281024 pod_ready.go:94] pod "coredns-66bc5c9577-7svv2" is "Ready"
	I1025 09:56:03.699066  281024 pod_ready.go:86] duration metric: took 2.505913158s for pod "coredns-66bc5c9577-7svv2" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:56:03.701860  281024 pod_ready.go:83] waiting for pod "etcd-functional-558907" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:56:04.707744  281024 pod_ready.go:94] pod "etcd-functional-558907" is "Ready"
	I1025 09:56:04.707757  281024 pod_ready.go:86] duration metric: took 1.005884794s for pod "etcd-functional-558907" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:56:04.710112  281024 pod_ready.go:83] waiting for pod "kube-apiserver-functional-558907" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:56:06.715410  281024 pod_ready.go:104] pod "kube-apiserver-functional-558907" is not "Ready", error: <nil>
	W1025 09:56:08.715987  281024 pod_ready.go:104] pod "kube-apiserver-functional-558907" is not "Ready", error: <nil>
	W1025 09:56:11.215713  281024 pod_ready.go:104] pod "kube-apiserver-functional-558907" is not "Ready", error: <nil>
	I1025 09:56:11.716377  281024 pod_ready.go:94] pod "kube-apiserver-functional-558907" is "Ready"
	I1025 09:56:11.716392  281024 pod_ready.go:86] duration metric: took 7.006266779s for pod "kube-apiserver-functional-558907" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:56:11.718946  281024 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-558907" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:56:11.723963  281024 pod_ready.go:94] pod "kube-controller-manager-functional-558907" is "Ready"
	I1025 09:56:11.723976  281024 pod_ready.go:86] duration metric: took 5.018013ms for pod "kube-controller-manager-functional-558907" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:56:11.726274  281024 pod_ready.go:83] waiting for pod "kube-proxy-4gvgp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:56:11.730999  281024 pod_ready.go:94] pod "kube-proxy-4gvgp" is "Ready"
	I1025 09:56:11.731013  281024 pod_ready.go:86] duration metric: took 4.726925ms for pod "kube-proxy-4gvgp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:56:11.733596  281024 pod_ready.go:83] waiting for pod "kube-scheduler-functional-558907" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:56:11.914727  281024 pod_ready.go:94] pod "kube-scheduler-functional-558907" is "Ready"
	I1025 09:56:11.914741  281024 pod_ready.go:86] duration metric: took 181.133536ms for pod "kube-scheduler-functional-558907" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:56:11.914752  281024 pod_ready.go:40] duration metric: took 10.725077449s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:56:11.968194  281024 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:56:11.973448  281024 out.go:179] * Done! kubectl is now configured to use "functional-558907" cluster and "default" namespace by default
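	The pod_ready.go lines above are minikube's post-start readiness gate: it polls each kube-system pod matching the listed labels until its PodReady condition is True (or the pod is gone). A minimal client-go sketch of that loop, assuming a kubeconfig at the default path; the names here are illustrative, not minikube's actual pod_ready.go code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether a pod's PodReady condition is True.
	func isReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: kubeconfig at the standard location ($HOME/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// The same label selectors the log shows minikube waiting on.
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}

		// Poll every 2s, up to the 4m0s budget seen in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				for _, sel := range selectors {
					pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
					if err != nil {
						return false, nil // transient API error: keep polling
					}
					for i := range pods.Items {
						if !isReady(&pods.Items[i]) {
							return false, nil // minikube also accepts "pod gone"; simplified here
						}
					}
				}
				return true, nil
			})
		fmt.Println("control-plane pods Ready:", err == nil)
	}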
	
	
	==> CRI-O <==
	Oct 25 09:56:46 functional-558907 crio[3537]: time="2025-10-25T09:56:46.868105641Z" level=info msg="Checking pod default_hello-node-75c85bcc94-8xkpb for CNI network kindnet (type=ptp)"
	Oct 25 09:56:46 functional-558907 crio[3537]: time="2025-10-25T09:56:46.871294403Z" level=info msg="Ran pod sandbox ad627e4d64d0237ae742953a0e47118d486123542af13ab71ff6d97cac6d04ab with infra container: default/hello-node-75c85bcc94-8xkpb/POD" id=8e2259cc-9c62-4322-a2ef-a2ee5de5d4ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:56:46 functional-558907 crio[3537]: time="2025-10-25T09:56:46.875243681Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8f3fc424-2162-4b76-8325-223233aae0a1 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:56:47 functional-558907 crio[3537]: time="2025-10-25T09:56:47.825736034Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d877a85d-448e-4e66-9054-2455d0ed7378 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:56:52 functional-558907 crio[3537]: time="2025-10-25T09:56:52.781288003Z" level=info msg="Stopping pod sandbox: d4bb320c5baed806a024bb317f91897534e98e05809a4afd40e43443963e95f1" id=dddfb456-cd41-4f45-86fb-e09e8770fa23 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:56:52 functional-558907 crio[3537]: time="2025-10-25T09:56:52.781347376Z" level=info msg="Stopped pod sandbox (already stopped): d4bb320c5baed806a024bb317f91897534e98e05809a4afd40e43443963e95f1" id=dddfb456-cd41-4f45-86fb-e09e8770fa23 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:56:52 functional-558907 crio[3537]: time="2025-10-25T09:56:52.781786846Z" level=info msg="Removing pod sandbox: d4bb320c5baed806a024bb317f91897534e98e05809a4afd40e43443963e95f1" id=55beeb44-3881-4a50-8e93-856086a4e4c2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:56:52 functional-558907 crio[3537]: time="2025-10-25T09:56:52.785389692Z" level=info msg="Removed pod sandbox: d4bb320c5baed806a024bb317f91897534e98e05809a4afd40e43443963e95f1" id=55beeb44-3881-4a50-8e93-856086a4e4c2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:56:52 functional-558907 crio[3537]: time="2025-10-25T09:56:52.78601254Z" level=info msg="Stopping pod sandbox: d9b36e32149a693335193d140e558ebce19159392ffb5f446b255eddebff899d" id=472eaccf-4cb0-475a-bcd8-fc7e375516ef name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:56:52 functional-558907 crio[3537]: time="2025-10-25T09:56:52.786063502Z" level=info msg="Stopped pod sandbox (already stopped): d9b36e32149a693335193d140e558ebce19159392ffb5f446b255eddebff899d" id=472eaccf-4cb0-475a-bcd8-fc7e375516ef name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:56:52 functional-558907 crio[3537]: time="2025-10-25T09:56:52.786408556Z" level=info msg="Removing pod sandbox: d9b36e32149a693335193d140e558ebce19159392ffb5f446b255eddebff899d" id=afc1d581-1e8e-4d58-acac-4fbce274edb5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:56:52 functional-558907 crio[3537]: time="2025-10-25T09:56:52.789894691Z" level=info msg="Removed pod sandbox: d9b36e32149a693335193d140e558ebce19159392ffb5f446b255eddebff899d" id=afc1d581-1e8e-4d58-acac-4fbce274edb5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:56:52 functional-558907 crio[3537]: time="2025-10-25T09:56:52.790406219Z" level=info msg="Stopping pod sandbox: 2fce305702bd634231e1a9a45da0f7c53d72bbdb08ff6ebcfa50af5873632ab1" id=9570f62d-e432-4f01-b808-dea66053822b name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:56:52 functional-558907 crio[3537]: time="2025-10-25T09:56:52.790589064Z" level=info msg="Stopped pod sandbox (already stopped): 2fce305702bd634231e1a9a45da0f7c53d72bbdb08ff6ebcfa50af5873632ab1" id=9570f62d-e432-4f01-b808-dea66053822b name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:56:52 functional-558907 crio[3537]: time="2025-10-25T09:56:52.790918027Z" level=info msg="Removing pod sandbox: 2fce305702bd634231e1a9a45da0f7c53d72bbdb08ff6ebcfa50af5873632ab1" id=5f3287a1-38ca-4441-97e0-5f57dc88dfdd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:56:52 functional-558907 crio[3537]: time="2025-10-25T09:56:52.794669444Z" level=info msg="Removed pod sandbox: 2fce305702bd634231e1a9a45da0f7c53d72bbdb08ff6ebcfa50af5873632ab1" id=5f3287a1-38ca-4441-97e0-5f57dc88dfdd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:56:58 functional-558907 crio[3537]: time="2025-10-25T09:56:58.826959393Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e31450d1-305e-423d-9558-17321460e87d name=/runtime.v1.ImageService/PullImage
	Oct 25 09:57:14 functional-558907 crio[3537]: time="2025-10-25T09:57:14.826484449Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=811c6602-7a2c-4e39-94f0-cc0063b97247 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:57:24 functional-558907 crio[3537]: time="2025-10-25T09:57:24.826522153Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ce9b1574-08a0-4755-8af0-a0f0f9b83b31 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:57:55 functional-558907 crio[3537]: time="2025-10-25T09:57:55.825958519Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=152538fb-8066-4c52-8b06-32a153649707 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:58:19 functional-558907 crio[3537]: time="2025-10-25T09:58:19.825381258Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=01f3fc80-4135-4ebb-978c-850f36bf934f name=/runtime.v1.ImageService/PullImage
	Oct 25 09:59:22 functional-558907 crio[3537]: time="2025-10-25T09:59:22.828627887Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=69679e45-2bca-4b12-a809-1574a7b53d0a name=/runtime.v1.ImageService/PullImage
	Oct 25 09:59:46 functional-558907 crio[3537]: time="2025-10-25T09:59:46.826279878Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f86da628-a02c-4cfc-90c5-3f638c9c0e69 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:02:06 functional-558907 crio[3537]: time="2025-10-25T10:02:06.826629186Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e50bf883-73f3-4f8d-88f4-3ae211573706 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:02:41 functional-558907 crio[3537]: time="2025-10-25T10:02:41.826335042Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5a04a2ca-c207-49c4-94d3-cc497058a99c name=/runtime.v1.ImageService/PullImage
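	This CRI-O section shows kubelet re-issuing PullImage for kicbase/echo-server:latest on a lengthening, back-off-like schedule (seconds apart at 09:56, minutes apart by 10:02), which is why the hello-node pod's container never starts. A hedged way to attempt the same pull by hand on the node, assuming crictl is installed and CRI-O listens on its default socket path:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Assumption: run on the minikube node, where crictl talks to CRI-O
		// at the default unix:///var/run/crio/crio.sock endpoint.
		out, err := exec.Command("sudo", "crictl",
			"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
			"pull", "kicbase/echo-server:latest").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// A failure here is what kubelet keeps hitting above; it retries
			// PullImage with back-off rather than failing the pod outright.
			fmt.Println("pull failed:", err)
		}
	}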
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	956f02358aed7       docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f   9 minutes ago       Running             myfrontend                0                   b69b21f9b558a       sp-pod                                      default
	95a64beb899b9       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   edfac162c62a0       nginx-svc                                   default
	1a275944591bf       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   8fc9142922e43       kube-proxy-4gvgp                            kube-system
	21c7a93695981       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   b88ec16ab55cb       kindnet-tfc9f                               kube-system
	4da774306060b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   594719389caa8       storage-provisioner                         kube-system
	51f081f768c7b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   3340363a6cd40       coredns-66bc5c9577-7svv2                    kube-system
	a12437c2b5688       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   44c33142b6ca2       kube-apiserver-functional-558907            kube-system
	a7844b2a567c1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   ca9584a4b16a2       kube-scheduler-functional-558907            kube-system
	fee9c19c56f17       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   d06715e1d2403       kube-controller-manager-functional-558907   kube-system
	249f3e6b02897       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   e36846ac1be52       etcd-functional-558907                      kube-system
	8c05c737458aa       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Exited              storage-provisioner       2                   594719389caa8       storage-provisioner                         kube-system
	62a62b4e06c0d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   e36846ac1be52       etcd-functional-558907                      kube-system
	fadeee9beae5b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   8fc9142922e43       kube-proxy-4gvgp                            kube-system
	5a82b264b27e5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   3340363a6cd40       coredns-66bc5c9577-7svv2                    kube-system
	8bf36daee1c41       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   d06715e1d2403       kube-controller-manager-functional-558907   kube-system
	058f88bb450f9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   b88ec16ab55cb       kindnet-tfc9f                               kube-system
	569169d7199d0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   ca9584a4b16a2       kube-scheduler-functional-558907            kube-system
	
	
	==> coredns [51f081f768c7b9502d4595209244c3ed0bfa70f49660202b063a093cd5fbfb8e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43268 - 13618 "HINFO IN 7964054011766946077.2990994655529862593. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022698847s
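	The random-label HINFO query logged above is CoreDNS's loop-plugin self-probe; NXDOMAIN is the healthy answer (no forwarding loop). A small sketch issuing a comparable query with the miekg/dns library; the 10.96.0.10 kube-dns ClusterIP is an assumption, not taken from this report:

	package main

	import (
		"fmt"

		"github.com/miekg/dns"
	)

	func main() {
		m := new(dns.Msg)
		// Random-name HINFO probes like the one in the log come from the
		// CoreDNS loop plugin; NXDOMAIN means no loop was detected.
		m.SetQuestion(dns.Fqdn("example-probe.invalid"), dns.TypeHINFO)
		c := new(dns.Client)
		// Assumption: in-cluster kube-dns ClusterIP; adjust for your cluster.
		r, rtt, err := c.Exchange(m, "10.96.0.10:53")
		if err != nil {
			panic(err)
		}
		fmt.Println(dns.RcodeToString[r.Rcode], rtt)
	}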
	
	
	==> coredns [5a82b264b27e5561d8f525e7ee11b984cfb1fd289b74943e7d4ba5d8f191daaf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43387 - 28587 "HINFO IN 4123483826135059826.4354081609590360525. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015449932s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-558907
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-558907
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=functional-558907
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_54_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:54:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-558907
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:06:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:03:56 +0000   Sat, 25 Oct 2025 09:54:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:03:56 +0000   Sat, 25 Oct 2025 09:54:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:03:56 +0000   Sat, 25 Oct 2025 09:54:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:03:56 +0000   Sat, 25 Oct 2025 09:55:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-558907
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                8fefcdfb-52bb-4740-9ebb-43557d58f6ec
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-8xkpb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-6zlhk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-7svv2                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-558907                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-tfc9f                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-558907             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-558907    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4gvgp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-558907             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-558907 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-558907 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-558907 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-558907 event: Registered Node functional-558907 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-558907 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-558907 event: Registered Node functional-558907 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-558907 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-558907 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-558907 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-558907 event: Registered Node functional-558907 in Controller
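	The Conditions and Capacity tables above are the data behind the node_conditions.go check earlier in the log (ephemeral storage 203034800Ki, 2 CPUs, no pressure conditions). A hedged client-go sketch that reads the same fields, assuming a default kubeconfig:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// The same capacity fields node_conditions.go logs.
			fmt.Printf("%s ephemeral-storage=%s cpu=%s\n", n.Name,
				n.Status.Capacity.StorageEphemeral().String(), n.Status.Capacity.Cpu().String())
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					fmt.Printf("  %s=%s (%s)\n", c.Type, c.Status, c.Reason)
				}
			}
		}
	}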
	
	
	==> dmesg <==
	[Oct25 09:28] overlayfs: idmapped layers are currently not supported
	[ +37.283444] overlayfs: idmapped layers are currently not supported
	[Oct25 09:29] overlayfs: idmapped layers are currently not supported
	[ +38.328802] overlayfs: idmapped layers are currently not supported
	[Oct25 09:30] overlayfs: idmapped layers are currently not supported
	[Oct25 09:31] overlayfs: idmapped layers are currently not supported
	[Oct25 09:32] overlayfs: idmapped layers are currently not supported
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	[Oct25 09:35] overlayfs: idmapped layers are currently not supported
	[Oct25 09:36] overlayfs: idmapped layers are currently not supported
	[ +24.160248] overlayfs: idmapped layers are currently not supported
	[Oct25 09:37] overlayfs: idmapped layers are currently not supported
	[  +8.216028] overlayfs: idmapped layers are currently not supported
	[Oct25 09:38] overlayfs: idmapped layers are currently not supported
	[Oct25 09:39] overlayfs: idmapped layers are currently not supported
	[Oct25 09:41] overlayfs: idmapped layers are currently not supported
	[ +14.126672] overlayfs: idmapped layers are currently not supported
	[Oct25 09:42] overlayfs: idmapped layers are currently not supported
	[Oct25 09:43] overlayfs: idmapped layers are currently not supported
	[Oct25 09:45] kauditd_printk_skb: 8 callbacks suppressed
	[Oct25 09:47] overlayfs: idmapped layers are currently not supported
	[Oct25 09:53] overlayfs: idmapped layers are currently not supported
	[Oct25 09:54] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [249f3e6b028976b6d2d7c3f24fb965cba0748f2de12fb45547541c676111015d] <==
	{"level":"warn","ts":"2025-10-25T09:55:55.592762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:55.617393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:55.658401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:55.677218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:55.701273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:55.759743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:55.823969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:55.853212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:55.876100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:55.915107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:55.940679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:56.034523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:56.055307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:56.085048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:56.113777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:56.150297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:56.169280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:56.207255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:56.246001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:56.262991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:56.292495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:56.386571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37978","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T10:05:54.318525Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1087}
	{"level":"info","ts":"2025-10-25T10:05:54.342931Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1087,"took":"24.116419ms","hash":1761972442,"current-db-size-bytes":3207168,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1327104,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-10-25T10:05:54.343000Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1761972442,"revision":1087,"compact-revision":-1}
	
	
	==> etcd [62a62b4e06c0dff54dea609247eeb1fbff75bbda9423498ebf01ac5e3e02dda1] <==
	{"level":"warn","ts":"2025-10-25T09:55:19.357554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:19.375589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:19.400154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:19.426202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:19.441935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:19.460482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:55:19.556470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46622","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:55:40.372438Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T09:55:40.372500Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-558907","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-25T09:55:40.372590Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:55:40.372645Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:55:40.650568Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-25T09:55:40.650637Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-25T09:55:40.650739Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-25T09:55:40.650740Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:55:40.650814Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-25T09:55:40.650724Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:55:40.650830Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:55:40.650837Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:55:40.650862Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-25T09:55:40.650878Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-25T09:55:40.654682Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-25T09:55:40.654781Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:55:40.654816Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-25T09:55:40.654824Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-558907","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:06:32 up  1:49,  0 user,  load average: 0.17, 0.43, 1.34
	Linux functional-558907 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [058f88bb450f9bb3a12b91f3e4715706d02f7b950cf1bbf5a062314b291db72a] <==
	I1025 09:55:15.765629       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:55:15.816264       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1025 09:55:15.816429       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:55:15.816442       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:55:15.816453       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:55:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:55:16.055164       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:55:16.055200       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:55:16.055212       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:55:16.061603       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:55:20.860075       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:55:20.860184       1 metrics.go:72] Registering metrics
	I1025 09:55:20.860282       1 controller.go:711] "Syncing nftables rules"
	I1025 09:55:26.034798       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:55:26.034951       1 main.go:301] handling current node
	I1025 09:55:36.034085       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:55:36.034157       1 main.go:301] handling current node
	
	
	==> kindnet [21c7a93695981cc6745aaeb3dc9105829c5843147b6872a2b03acdf7c829e447] <==
	I1025 10:04:28.617191       1 main.go:301] handling current node
	I1025 10:04:38.621690       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:04:38.621724       1 main.go:301] handling current node
	I1025 10:04:48.617274       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:04:48.617312       1 main.go:301] handling current node
	I1025 10:04:58.616239       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:04:58.616385       1 main.go:301] handling current node
	I1025 10:05:08.623071       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:05:08.623111       1 main.go:301] handling current node
	I1025 10:05:18.622836       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:05:18.622872       1 main.go:301] handling current node
	I1025 10:05:28.616887       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:05:28.616921       1 main.go:301] handling current node
	I1025 10:05:38.617375       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:05:38.617409       1 main.go:301] handling current node
	I1025 10:05:48.625429       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:05:48.625464       1 main.go:301] handling current node
	I1025 10:05:58.616348       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:05:58.616466       1 main.go:301] handling current node
	I1025 10:06:08.622070       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:06:08.622105       1 main.go:301] handling current node
	I1025 10:06:18.623828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:06:18.623865       1 main.go:301] handling current node
	I1025 10:06:28.616710       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:06:28.616833       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a12437c2b5688a8404b106bda755e1947027972633ffd167eb3ba7ab638fbb53] <==
	I1025 09:55:57.279046       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:55:57.279103       1 policy_source.go:240] refreshing policies
	I1025 09:55:57.287337       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 09:55:57.290197       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 09:55:57.290436       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:55:57.292749       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 09:55:57.292894       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 09:55:57.292778       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 09:55:57.313321       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:55:57.322522       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:55:57.816369       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:55:58.020248       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:55:59.398467       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:55:59.521384       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:55:59.588041       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:55:59.595225       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:56:00.685532       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:56:00.721870       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:56:00.916518       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:56:15.314830       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.224.135"}
	I1025 09:56:21.412278       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.96.7"}
	I1025 09:56:30.117166       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.79.46"}
	E1025 09:56:46.414744       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50520: use of closed network connection
	I1025 09:56:46.628015       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.220.192"}
	I1025 10:05:57.228279       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
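	The alloc.go entries above record the apiserver assigning ClusterIPs from the 10.96.0.0/12 service CIDR as the test creates nginx-svc, hello-node-connect, and hello-node. A hedged sketch that creates a throwaway Service and reads back the allocated IP; the service name and selector are illustrative only:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/intstr"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Hypothetical service; no ClusterIP is set, so the apiserver
		// allocates one from the service CIDR, as in the log above.
		svc := &corev1.Service{
			ObjectMeta: metav1.ObjectMeta{Name: "echo-demo"},
			Spec: corev1.ServiceSpec{
				Selector: map[string]string{"app": "echo-demo"},
				Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
			},
		}
		created, err := cs.CoreV1().Services("default").Create(context.Background(), svc, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allocated ClusterIP:", created.Spec.ClusterIP)
	}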
	
	
	==> kube-controller-manager [8bf36daee1c415d03c7cf4a6a8b445429c13535c9f31a872f174f4fe5e92e2f1] <==
	I1025 09:55:24.065466       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:55:24.065510       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:55:24.065625       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:55:24.065767       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-558907"
	I1025 09:55:24.065836       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:55:24.065894       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:55:24.065975       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:55:24.066097       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:55:24.066358       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:55:24.066401       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:55:24.066503       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 09:55:24.068051       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:55:24.072886       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:55:24.072976       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:55:24.073010       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:55:24.076013       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:55:24.076148       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:55:24.078363       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:55:24.088154       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:55:24.096416       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:55:24.097645       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:55:24.103801       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:55:24.106107       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:55:24.114552       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:55:24.114853       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	
	
	==> kube-controller-manager [fee9c19c56f1707819b3dadd5364b389fb8fd1ca2198e0ef2fbdf775837b019a] <==
	I1025 09:56:00.658228       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:56:00.658368       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:56:00.662075       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:56:00.674973       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 09:56:00.676382       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:56:00.677094       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:56:00.683158       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 09:56:00.683303       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:56:00.693748       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:56:00.700814       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:56:00.707680       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 09:56:00.707832       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 09:56:00.707922       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:56:00.708007       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-558907"
	I1025 09:56:00.708057       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:56:00.708634       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:56:00.714153       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:56:00.714234       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 09:56:00.714411       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:56:00.718132       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:56:00.720730       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:56:00.736475       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:56:00.774816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:56:00.774848       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:56:00.774865       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1a275944591bfbf61f43cea0bcb551f730da98a02458b46a386b637b8473c2b1] <==
	I1025 09:55:58.332218       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:55:58.439871       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:55:58.541262       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:55:58.541298       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:55:58.541374       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:55:58.562694       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:55:58.562749       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:55:58.567369       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:55:58.567750       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:55:58.567776       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:55:58.571495       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:55:58.571588       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:55:58.571885       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:55:58.571908       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:55:58.571901       1 config.go:200] "Starting service config controller"
	I1025 09:55:58.571984       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:55:58.572046       1 config.go:309] "Starting node config controller"
	I1025 09:55:58.572075       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:55:58.671796       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:55:58.672324       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:55:58.672330       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:55:58.672355       1 shared_informer.go:356] "Caches are synced" controller="service config"
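	Both kube-proxy logs follow the standard informer startup handshake: each config controller logs "Waiting for caches to sync" and programs nothing until "Caches are synced". A minimal client-go sketch of that pattern over the same Service and EndpointSlice watches, assuming a default kubeconfig:

	package main

	import (
		"fmt"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// kube-proxy watches Services and EndpointSlices; sync the same two
		// caches here before doing any work, mirroring the log's sequence.
		factory := informers.NewSharedInformerFactory(cs, 0)
		svc := factory.Core().V1().Services().Informer()
		eps := factory.Discovery().V1().EndpointSlices().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		if !cache.WaitForCacheSync(stop, svc.HasSynced, eps.HasSynced) {
			panic("caches did not sync")
		}
		fmt.Println("caches are synced") // mirrors the shared_informer message
	}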
	
	
	==> kube-proxy [fadeee9beae5be8e4323f2f37468956d45ec8bf24eab4b3e25b38515b3c4a9ae] <==
	I1025 09:55:18.962799       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:55:19.377819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:55:20.901895       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:55:20.901936       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:55:20.902031       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:55:21.086883       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:55:21.087004       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:55:21.095409       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:55:21.097476       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:55:21.097500       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:55:21.100879       1 config.go:200] "Starting service config controller"
	I1025 09:55:21.100903       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:55:21.100922       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:55:21.100926       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:55:21.100951       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:55:21.100955       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:55:21.101678       1 config.go:309] "Starting node config controller"
	I1025 09:55:21.101694       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:55:21.101700       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:55:21.203664       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:55:21.203723       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:55:21.203756       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [569169d7199d048e999ea9f423c61726ebfdb9625105e6f0f38710fc7abc7203] <==
	I1025 09:55:19.950588       1 serving.go:386] Generated self-signed cert in-memory
	I1025 09:55:21.006722       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:55:21.006767       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:55:21.018216       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 09:55:21.018258       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 09:55:21.018292       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:55:21.018301       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:55:21.018317       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:55:21.018325       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:55:21.018544       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:55:21.018607       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:55:21.119600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:55:21.119710       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 09:55:21.119831       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:55:40.361220       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1025 09:55:40.361241       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1025 09:55:40.361264       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1025 09:55:40.361302       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:55:40.361324       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1025 09:55:40.361342       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:55:40.361638       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1025 09:55:40.361667       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a7844b2a567c168a9fb3d5e1e1e7c968379e24872ccb194ab104ac32753c7d99] <==
	I1025 09:55:55.669953       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:55:57.014948       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:55:57.014975       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:55:57.014986       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:55:57.014992       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:55:57.230874       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:55:57.230970       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:55:57.233527       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:55:57.237560       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:55:57.237814       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:55:57.238110       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:55:57.338912       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:03:50 functional-558907 kubelet[3860]: E1025 10:03:50.825935    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8xkpb" podUID="dd94259e-9aeb-46a7-9b1e-58caf77a2fb0"
	Oct 25 10:03:54 functional-558907 kubelet[3860]: E1025 10:03:54.825846    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6zlhk" podUID="29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1"
	Oct 25 10:04:05 functional-558907 kubelet[3860]: E1025 10:04:05.825911    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8xkpb" podUID="dd94259e-9aeb-46a7-9b1e-58caf77a2fb0"
	Oct 25 10:04:07 functional-558907 kubelet[3860]: E1025 10:04:07.825427    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6zlhk" podUID="29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1"
	Oct 25 10:04:20 functional-558907 kubelet[3860]: E1025 10:04:20.826304    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8xkpb" podUID="dd94259e-9aeb-46a7-9b1e-58caf77a2fb0"
	Oct 25 10:04:20 functional-558907 kubelet[3860]: E1025 10:04:20.826966    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6zlhk" podUID="29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1"
	Oct 25 10:04:33 functional-558907 kubelet[3860]: E1025 10:04:33.825183    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6zlhk" podUID="29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1"
	Oct 25 10:04:34 functional-558907 kubelet[3860]: E1025 10:04:34.825495    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8xkpb" podUID="dd94259e-9aeb-46a7-9b1e-58caf77a2fb0"
	Oct 25 10:04:47 functional-558907 kubelet[3860]: E1025 10:04:47.825676    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6zlhk" podUID="29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1"
	Oct 25 10:04:48 functional-558907 kubelet[3860]: E1025 10:04:48.826436    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8xkpb" podUID="dd94259e-9aeb-46a7-9b1e-58caf77a2fb0"
	Oct 25 10:04:58 functional-558907 kubelet[3860]: E1025 10:04:58.825819    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6zlhk" podUID="29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1"
	Oct 25 10:05:01 functional-558907 kubelet[3860]: E1025 10:05:01.825902    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8xkpb" podUID="dd94259e-9aeb-46a7-9b1e-58caf77a2fb0"
	Oct 25 10:05:10 functional-558907 kubelet[3860]: E1025 10:05:10.825805    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6zlhk" podUID="29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1"
	Oct 25 10:05:14 functional-558907 kubelet[3860]: E1025 10:05:14.826130    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8xkpb" podUID="dd94259e-9aeb-46a7-9b1e-58caf77a2fb0"
	Oct 25 10:05:24 functional-558907 kubelet[3860]: E1025 10:05:24.825729    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6zlhk" podUID="29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1"
	Oct 25 10:05:26 functional-558907 kubelet[3860]: E1025 10:05:26.825862    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8xkpb" podUID="dd94259e-9aeb-46a7-9b1e-58caf77a2fb0"
	Oct 25 10:05:35 functional-558907 kubelet[3860]: E1025 10:05:35.825660    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6zlhk" podUID="29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1"
	Oct 25 10:05:40 functional-558907 kubelet[3860]: E1025 10:05:40.826232    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8xkpb" podUID="dd94259e-9aeb-46a7-9b1e-58caf77a2fb0"
	Oct 25 10:05:46 functional-558907 kubelet[3860]: E1025 10:05:46.825972    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6zlhk" podUID="29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1"
	Oct 25 10:05:55 functional-558907 kubelet[3860]: E1025 10:05:55.825930    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8xkpb" podUID="dd94259e-9aeb-46a7-9b1e-58caf77a2fb0"
	Oct 25 10:06:01 functional-558907 kubelet[3860]: E1025 10:06:01.825198    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6zlhk" podUID="29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1"
	Oct 25 10:06:07 functional-558907 kubelet[3860]: E1025 10:06:07.825768    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8xkpb" podUID="dd94259e-9aeb-46a7-9b1e-58caf77a2fb0"
	Oct 25 10:06:16 functional-558907 kubelet[3860]: E1025 10:06:16.826046    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6zlhk" podUID="29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1"
	Oct 25 10:06:22 functional-558907 kubelet[3860]: E1025 10:06:22.827821    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8xkpb" podUID="dd94259e-9aeb-46a7-9b1e-58caf77a2fb0"
	Oct 25 10:06:29 functional-558907 kubelet[3860]: E1025 10:06:29.825744    3860 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-6zlhk" podUID="29a7f2e1-9e54-4b3c-9fc8-cdeedeb2b7f1"
	
	
	==> storage-provisioner [4da774306060b11bd2a0cf42fcfa9c5ee9ea029498344375531bd1795504a7c7] <==
	W1025 10:06:08.604777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:10.608350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:10.613383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:12.616889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:12.623994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:14.626996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:14.631986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:16.634644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:16.638833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:18.641399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:18.648137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:20.651551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:20.656330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:22.659402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:22.666129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:24.668943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:24.673683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:26.676512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:26.681004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:28.684385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:28.688878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:30.693050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:30.698383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:32.702616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:06:32.710335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8c05c737458aa689b5c1d83b7aac482fd3eb0e689c694e1f0c984953515c7bf8] <==
	I1025 09:55:36.346361       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:55:36.366668       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:55:36.366795       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:55:36.371101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:55:39.827131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-558907 -n functional-558907
helpers_test.go:269: (dbg) Run:  kubectl --context functional-558907 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-8xkpb hello-node-connect-7d85dfc575-6zlhk
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-558907 describe pod hello-node-75c85bcc94-8xkpb hello-node-connect-7d85dfc575-6zlhk
helpers_test.go:290: (dbg) kubectl --context functional-558907 describe pod hello-node-75c85bcc94-8xkpb hello-node-connect-7d85dfc575-6zlhk:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-8xkpb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-558907/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:56:46 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lrv6d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lrv6d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8xkpb to functional-558907
	  Normal   Pulling    6m47s (x5 over 9m47s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m47s (x5 over 9m47s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m47s (x5 over 9m47s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m35s (x21 over 9m46s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m35s (x21 over 9m46s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-6zlhk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-558907/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:56:29 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lc574 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lc574:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-6zlhk to functional-558907
	  Normal   Pulling    7m11s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m52s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.58s)
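
The root cause running through this failure is visible in the kubelet events above: CRI-O's short-name mode is set to enforcing, and the unqualified reference kicbase/echo-server matches more than one configured search registry, so the pull is rejected as ambiguous rather than resolved by guessing. The durable fix is usually a short-name alias (an [aliases] entry in a registries.conf drop-in on the node) or a fully-qualified image reference. A minimal node-side check, assuming docker.io is the registry the test intends:

	# Sketch: pull the fully-qualified reference directly into the node's
	# CRI-O store; this bypasses short-name resolution entirely and confirms
	# the registry itself is reachable. docker.io is an assumption here.
	out/minikube-linux-arm64 -p functional-558907 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest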

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-558907 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-558907 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-8xkpb" [dd94259e-9aeb-46a7-9b1e-58caf77a2fb0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1025 09:57:05.123967  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:59:21.255546  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:59:48.966162  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:04:21.255765  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-558907 -n functional-558907
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-25 10:06:47.090380836 +0000 UTC m=+1242.917185512
functional_test.go:1460: (dbg) Run:  kubectl --context functional-558907 describe po hello-node-75c85bcc94-8xkpb -n default
functional_test.go:1460: (dbg) kubectl --context functional-558907 describe po hello-node-75c85bcc94-8xkpb -n default:
Name:             hello-node-75c85bcc94-8xkpb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-558907/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:56:46 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lrv6d (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-lrv6d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8xkpb to functional-558907
  Normal   Pulling    7m1s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m1s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m1s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m49s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m49s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-558907 logs hello-node-75c85bcc94-8xkpb -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-558907 logs hello-node-75c85bcc94-8xkpb -n default: exit status 1 (126.650606ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-8xkpb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-558907 logs hello-node-75c85bcc94-8xkpb -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.90s)
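
As with ServiceCmdConnect above, the deployment never becomes ready because the unqualified image name trips CRI-O's enforcing short-name mode. A sketch of the same deployment with a fully-qualified reference, which sidesteps short-name resolution entirely (docker.io is assumed to be the intended registry):

	# Sketch: a fully-qualified image avoids the "ambiguous list" rejection.
	kubectl --context functional-558907 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-558907 expose deployment hello-node \
	  --type=NodePort --port=8080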

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-558907 service --namespace=default --https --url hello-node: exit status 115 (476.455625ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32068
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-558907 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)
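
SVC_UNREACHABLE here is a downstream symptom: the service exists (its URL is even printed on stdout), but no ready pod backs it because of the pull failures above. A quick pre-check sketch that makes that dependency explicit before asking minikube for a URL:

	# Sketch: confirm the service has ready backends first.
	kubectl --context functional-558907 get pods -l app=hello-node \
	  --field-selector=status.phase=Running
	kubectl --context functional-558907 get endpointslices \
	  -l kubernetes.io/service-name=hello-node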

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-558907 service hello-node --url --format={{.IP}}: exit status 115 (654.606873ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-558907 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-558907 service hello-node --url: exit status 115 (582.040837ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32068
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-558907 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32068
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image load --daemon kicbase/echo-server:functional-558907 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-558907 image load --daemon kicbase/echo-server:functional-558907 --alsologtostderr: (2.041100861s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-558907" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.34s)
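
The assertion compares `image ls` output against the exact tag, but on crio a loaded image may be listed under a registry-qualified name (for example with a localhost/ or docker.io/ prefix). A looser verification sketch that surfaces what actually landed in the node's store:

	# Sketch: list whatever variant of the tag is present, rather than
	# asserting on one exact spelling of the repository name.
	out/minikube-linux-arm64 -p functional-558907 image ls | grep echo-server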

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image load --daemon kicbase/echo-server:functional-558907 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-558907" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-558907
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image load --daemon kicbase/echo-server:functional-558907 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-558907" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image save kicbase/echo-server:functional-558907 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1025 10:07:01.447235  288828 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:07:01.448008  288828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:07:01.448048  288828 out.go:374] Setting ErrFile to fd 2...
	I1025 10:07:01.448068  288828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:07:01.448433  288828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:07:01.449150  288828 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:07:01.449352  288828 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:07:01.449919  288828 cli_runner.go:164] Run: docker container inspect functional-558907 --format={{.State.Status}}
	I1025 10:07:01.471606  288828 ssh_runner.go:195] Run: systemctl --version
	I1025 10:07:01.471671  288828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
	I1025 10:07:01.489486  288828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/functional-558907/id_rsa Username:docker}
	I1025 10:07:01.592800  288828 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1025 10:07:01.592864  288828 cache_images.go:254] Failed to load cached images for "functional-558907": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1025 10:07:01.592881  288828 cache_images.go:266] failed pushing to: functional-558907

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)
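
This failure is downstream of ImageSaveToFile: the earlier `image save` never wrote the tarball, so the load finds nothing at the path. A sketch that guards the load on the artifact actually existing, so the save failure surfaces at its source:

	# Sketch: fail fast if the save step produced no tarball.
	TAR=/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	test -s "$TAR" || { echo "image save produced no tarball: $TAR" >&2; exit 1; }
	out/minikube-linux-arm64 -p functional-558907 image load "$TAR" --alsologtostderr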

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-558907
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image save --daemon kicbase/echo-server:functional-558907 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-558907
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-558907: exit status 1 (19.502471ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-558907

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-558907

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
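
The test inspects the exact name localhost/kicbase/echo-server:functional-558907, but whether `image save --daemon` round-trips the localhost/ prefix depends on how crio stored the tag. A diagnostic sketch that lists every matching repository in the Docker daemon before asserting on one spelling:

	# Sketch: show all tags carrying the functional-558907 suffix,
	# prefixed or not.
	docker image ls --format '{{.Repository}}:{{.Tag}}' | grep 'functional-558907'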

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (535.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 stop --alsologtostderr -v 5: (27.732079538s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 start --wait true --alsologtostderr -v 5
E1025 10:14:04.811196  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:14:21.254993  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:20.951356  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:48.653364  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:19:21.255638  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:21:20.950954  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-480889 start --wait true --alsologtostderr -v 5: exit status 80 (8m24.630423924s)

                                                
                                                
-- stdout --
	* [ha-480889] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-480889" primary control-plane node in "ha-480889" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-480889-m02" control-plane node in "ha-480889" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-480889-m03" control-plane node in "ha-480889" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:13:21.133168  308083 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:13:21.133290  308083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:21.133303  308083 out.go:374] Setting ErrFile to fd 2...
	I1025 10:13:21.133309  308083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:21.133562  308083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:13:21.133919  308083 out.go:368] Setting JSON to false
	I1025 10:13:21.134805  308083 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6953,"bootTime":1761380249,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:13:21.134877  308083 start.go:141] virtualization:  
	I1025 10:13:21.140316  308083 out.go:179] * [ha-480889] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:13:21.143327  308083 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:13:21.143404  308083 notify.go:220] Checking for updates...
	I1025 10:13:21.149301  308083 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:13:21.152089  308083 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:13:21.154925  308083 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:13:21.157773  308083 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:13:21.160618  308083 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:13:21.164113  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:21.164223  308083 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:13:21.197583  308083 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:13:21.197765  308083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:13:21.253016  308083 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-25 10:13:21.243524818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:13:21.253128  308083 docker.go:318] overlay module found
	I1025 10:13:21.256213  308083 out.go:179] * Using the docker driver based on existing profile
	I1025 10:13:21.259079  308083 start.go:305] selected driver: docker
	I1025 10:13:21.259120  308083 start.go:925] validating driver "docker" against &{Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:21.259253  308083 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:13:21.259348  308083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:13:21.316248  308083 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-25 10:13:21.30638419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:13:21.316658  308083 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:13:21.316688  308083 cni.go:84] Creating CNI manager for ""
	I1025 10:13:21.316750  308083 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1025 10:13:21.316803  308083 start.go:349] cluster config:
	{Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
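
The cluster config dumped above is the same data minikube persists as JSON in the profile directory. A sketch for pulling just the node roster out of it (assumes jq is installed; field names follow the Go struct names seen in the dump):

    # Print name, IP and role for each of the four nodes
    jq '.Nodes[] | {Name, IP, ControlPlane}' \
      /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json
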
	I1025 10:13:21.320059  308083 out.go:179] * Starting "ha-480889" primary control-plane node in "ha-480889" cluster
	I1025 10:13:21.322881  308083 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:13:21.325849  308083 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:13:21.328624  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:13:21.328676  308083 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:13:21.328688  308083 cache.go:58] Caching tarball of preloaded images
	I1025 10:13:21.328730  308083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:13:21.328805  308083 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:13:21.328816  308083 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:13:21.328961  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:21.348972  308083 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:13:21.348996  308083 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:13:21.349014  308083 cache.go:232] Successfully downloaded all kic artifacts
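
Because the kic base image is pinned by sha256 digest, finding it in the local daemon is enough to skip the pull entirely. A minimal reader-side check (sketch; assumes a local Docker CLI):

    # List local kicbase images together with their digests
    docker images --digests gcr.io/k8s-minikube/kicbase-builds
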
	I1025 10:13:21.349046  308083 start.go:360] acquireMachinesLock for ha-480889: {Name:mk41781a5f7df8ed38323f26b29dd3de0536d841 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:13:21.349099  308083 start.go:364] duration metric: took 35.972µs to acquireMachinesLock for "ha-480889"
	I1025 10:13:21.349123  308083 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:13:21.349129  308083 fix.go:54] fixHost starting: 
	I1025 10:13:21.349386  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:13:21.366278  308083 fix.go:112] recreateIfNeeded on ha-480889: state=Stopped err=<nil>
	W1025 10:13:21.366311  308083 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:13:21.369548  308083 out.go:252] * Restarting existing docker container for "ha-480889" ...
	I1025 10:13:21.369634  308083 cli_runner.go:164] Run: docker start ha-480889
	I1025 10:13:21.622973  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:13:21.639685  308083 kic.go:430] container "ha-480889" state is running.
	I1025 10:13:21.640060  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:13:21.659744  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:21.659977  308083 machine.go:93] provisionDockerMachine start ...
	I1025 10:13:21.660037  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:21.679901  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:21.680217  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:21.680227  308083 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:13:21.681077  308083 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37726->127.0.0.1:33173: read: connection reset by peer
	I1025 10:13:24.829722  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889
	
	I1025 10:13:24.829748  308083 ubuntu.go:182] provisioning hostname "ha-480889"
	I1025 10:13:24.829819  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:24.848138  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:24.848455  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:24.848472  308083 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-480889 && echo "ha-480889" | sudo tee /etc/hostname
	I1025 10:13:25.012654  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889
	
	I1025 10:13:25.012743  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:25.031520  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:25.031847  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:25.031875  308083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-480889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-480889/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-480889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:13:25.182388  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
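
The script above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1, so local name resolution works without DNS. A quick verification from inside the node (sketch):

    # Resolve the node hostname via the NSS "files" source, i.e. /etc/hosts
    getent hosts ha-480889
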
	I1025 10:13:25.182461  308083 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:13:25.182530  308083 ubuntu.go:190] setting up certificates
	I1025 10:13:25.182567  308083 provision.go:84] configureAuth start
	I1025 10:13:25.182666  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:13:25.200092  308083 provision.go:143] copyHostCerts
	I1025 10:13:25.200133  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:25.200165  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:13:25.200172  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:25.200245  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:13:25.200331  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:25.200352  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:13:25.200357  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:25.200382  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:13:25.200423  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:25.200438  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:13:25.200442  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:25.200464  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:13:25.200507  308083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.ha-480889 san=[127.0.0.1 192.168.49.2 ha-480889 localhost minikube]
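
The server certificate is generated with the SAN list shown above, covering the loopback address, the node IP, and the machine names. One way to confirm the SANs on the emitted PEM (sketch; the -ext flag needs OpenSSL 1.1.1 or newer):

    # Show only the subjectAltName extension of the generated server cert
    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem
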
	I1025 10:13:25.925035  308083 provision.go:177] copyRemoteCerts
	I1025 10:13:25.925106  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:13:25.925148  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:25.941975  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.046168  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 10:13:26.046249  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:13:26.065892  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 10:13:26.065964  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1025 10:13:26.086519  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 10:13:26.086582  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:13:26.105106  308083 provision.go:87] duration metric: took 922.501142ms to configureAuth
	I1025 10:13:26.105133  308083 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:13:26.105365  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:26.105486  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.123735  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:26.124045  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:26.124102  308083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:13:26.451879  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:13:26.451953  308083 machine.go:96] duration metric: took 4.791965867s to provisionDockerMachine
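
The sysconfig drop-in written a few lines up marks the whole service CIDR (10.96.0.0/12) as an insecure registry, so that in-cluster registries reachable only over plain HTTP (for example the registry addon's ClusterIP) remain pullable by CRI-O. To re-check the provisioned file on the node (sketch):

    # The drop-in minikube wrote before restarting crio
    cat /etc/sysconfig/crio.minikube
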
	I1025 10:13:26.451985  308083 start.go:293] postStartSetup for "ha-480889" (driver="docker")
	I1025 10:13:26.452035  308083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:13:26.452145  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:13:26.452222  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.474611  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.586070  308083 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:13:26.589442  308083 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:13:26.589480  308083 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:13:26.589492  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:13:26.589557  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:13:26.589654  308083 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:13:26.589667  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /etc/ssl/certs/2612562.pem
	I1025 10:13:26.589769  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:13:26.597470  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:26.615616  308083 start.go:296] duration metric: took 163.578765ms for postStartSetup
	I1025 10:13:26.615697  308083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:13:26.615759  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.632968  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.735211  308083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:13:26.740030  308083 fix.go:56] duration metric: took 5.390893179s for fixHost
	I1025 10:13:26.740056  308083 start.go:83] releasing machines lock for "ha-480889", held for 5.390944264s
	I1025 10:13:26.740127  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:13:26.756884  308083 ssh_runner.go:195] Run: cat /version.json
	I1025 10:13:26.756940  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.756964  308083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:13:26.757017  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.775539  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.778199  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.873785  308083 ssh_runner.go:195] Run: systemctl --version
	I1025 10:13:26.965654  308083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:13:27.005417  308083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:13:27.010728  308083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:13:27.010810  308083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:13:27.019133  308083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:13:27.019158  308083 start.go:495] detecting cgroup driver to use...
	I1025 10:13:27.019210  308083 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:13:27.019280  308083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:13:27.034337  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:13:27.047938  308083 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:13:27.048000  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:13:27.063832  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:13:27.081381  308083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:13:27.198834  308083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:13:27.303413  308083 docker.go:234] disabling docker service ...
	I1025 10:13:27.303534  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:13:27.318254  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:13:27.331149  308083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:13:27.440477  308083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:13:27.554598  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
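
The stop, disable, mask sequence above is deliberate: stopping only kills the running units, disabling only removes boot-time activation, and masking links the unit to /dev/null so socket activation cannot restart Docker behind CRI-O's back. The equivalent manual sequence (sketch):

    sudo systemctl stop docker.socket docker.service   # stop whatever is running
    sudo systemctl disable docker.socket               # drop boot-time activation
    sudo systemctl mask docker.service                 # block any future (re)activation
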
	I1025 10:13:27.567225  308083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:13:27.581183  308083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:13:27.581264  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.590278  308083 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:13:27.590389  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.599250  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.607897  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.616848  308083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:13:27.625132  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.634834  308083 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.643393  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.653830  308083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:13:27.661579  308083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:13:27.669371  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:27.781686  308083 ssh_runner.go:195] Run: sudo systemctl restart crio
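
After the sed edits above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a default_sysctls list containing "net.ipv4.ip_unprivileged_port_start=0". A one-line check on the node (sketch):

    # Verify the keys rewritten by the sed commands before crio was restarted
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
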
	I1025 10:13:27.909770  308083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:13:27.909891  308083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:13:27.913604  308083 start.go:563] Will wait 60s for crictl version
	I1025 10:13:27.913677  308083 ssh_runner.go:195] Run: which crictl
	I1025 10:13:27.917354  308083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:13:27.943799  308083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:13:27.943944  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:27.972380  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:28.006726  308083 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:13:28.009638  308083 cli_runner.go:164] Run: docker network inspect ha-480889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:13:28.029757  308083 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 10:13:28.033806  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
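
Note the grep/echo/cp idiom above: inside a container /etc/hosts is a bind mount, so it cannot be replaced by mv (rename fails on a mount point); the rewrite therefore goes through a temp file and cp, which writes through the existing inode. The same idiom, annotated (sketch):

    # Rebuild the file without the stale entry, append the fresh one,
    # then copy over the bind-mounted /etc/hosts in place
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
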
	I1025 10:13:28.045238  308083 kubeadm.go:883] updating cluster {Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:13:28.046168  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:13:28.046264  308083 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:28.081721  308083 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:13:28.081747  308083 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:13:28.081804  308083 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:28.109690  308083 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:13:28.109715  308083 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:13:28.109724  308083 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 10:13:28.109840  308083 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-480889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
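
In the generated drop-in above, the empty ExecStart= line is standard systemd syntax: for a non-oneshot service a drop-in must first clear the inherited ExecStart before it may define a new one. To inspect the merged unit on the node (sketch):

    # kubelet.service plus the 10-kubeadm.conf drop-in written below
    systemctl cat kubelet
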
	I1025 10:13:28.109926  308083 ssh_runner.go:195] Run: crio config
	I1025 10:13:28.181906  308083 cni.go:84] Creating CNI manager for ""
	I1025 10:13:28.181927  308083 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1025 10:13:28.181947  308083 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:13:28.181970  308083 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-480889 NodeName:ha-480889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:13:28.182120  308083 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-480889"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:13:28.182142  308083 kube-vip.go:115] generating kube-vip config ...
	I1025 10:13:28.182194  308083 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1025 10:13:28.194754  308083 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
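
kube-vip only enables IPVS-based control-plane load-balancing when the ip_vs kernel modules are loaded; with the probe above failing, it falls back to just ARP-advertising the VIP, as the config below shows. Loading the modules by hand would look like this (sketch; availability depends on the kernel build):

    # Load the IPVS core plus common schedulers, then re-run the probe
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs
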
	I1025 10:13:28.194852  308083 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
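
Once this static pod wins leader election, the 192.168.49.254/32 address is attached to eth0 of the leader and announced via gratuitous ARP. A quick check on the current leader (sketch):

    # The VIP should show up as a /32 on eth0 of exactly one control-plane node
    ip -4 addr show dev eth0 | grep 192.168.49.254
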
	I1025 10:13:28.194915  308083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:13:28.202716  308083 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:13:28.202791  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1025 10:13:28.211249  308083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1025 10:13:28.224427  308083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:13:28.236965  308083 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
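
The kubeadm config assembled above is staged as /var/tmp/minikube/kubeadm.yaml.new. Newer kubeadm releases can sanity-check such a file directly, using the same binary path the log shows (sketch; the validate subcommand may not exist on older versions):

    # Validate the staged config against the kubeadm v1beta4 schema
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
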
	I1025 10:13:28.249237  308083 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1025 10:13:28.261093  308083 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1025 10:13:28.265704  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:28.275389  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:28.388284  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:28.404560  308083 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889 for IP: 192.168.49.2
	I1025 10:13:28.404624  308083 certs.go:195] generating shared ca certs ...
	I1025 10:13:28.404659  308083 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:28.404824  308083 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:13:28.404900  308083 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:13:28.404925  308083 certs.go:257] generating profile certs ...
	I1025 10:13:28.405027  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key
	I1025 10:13:28.405078  308083 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d
	I1025 10:13:28.405107  308083 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1025 10:13:29.281974  308083 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d ...
	I1025 10:13:29.282465  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d: {Name:mk2ee9cff9ddeca542ff438d607ca92d489e621a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:29.282692  308083 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d ...
	I1025 10:13:29.282818  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d: {Name:mk666a1056a90e3af7ff477b2ecc4f82c52a5311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:29.282987  308083 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt
	I1025 10:13:29.283272  308083 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key
	I1025 10:13:29.283463  308083 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key
	I1025 10:13:29.283498  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 10:13:29.283530  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 10:13:29.283570  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 10:13:29.283605  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 10:13:29.283633  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 10:13:29.283680  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 10:13:29.283712  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 10:13:29.283743  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 10:13:29.283826  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:13:29.283879  308083 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:13:29.283905  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:13:29.283959  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:13:29.284007  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:13:29.284066  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:13:29.284138  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:29.284221  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.284263  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.284295  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem -> /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.284844  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:13:29.339963  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:13:29.378039  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:13:29.412109  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:13:29.439404  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:13:29.471848  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:13:29.495108  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:13:29.521223  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:13:29.555889  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:13:29.583865  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:13:29.607803  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:13:29.660341  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:13:29.687106  308083 ssh_runner.go:195] Run: openssl version
	I1025 10:13:29.696444  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:13:29.707221  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.717578  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.717659  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.790492  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:13:29.802381  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:13:29.810802  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.815111  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.815223  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.864875  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:13:29.872882  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:13:29.882139  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.887141  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.887254  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.933083  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
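
The hex link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: the library looks up trust anchors in /etc/ssl/certs as <subject-hash>.0, which is why each certificate is first hashed and then symlinked. Reproducing the mapping for the minikube CA (sketch):

    # Prints b5213941 for this CA; the trust link is then /etc/ssl/certs/b5213941.0
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
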
	I1025 10:13:29.942393  308083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:13:29.946745  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:13:29.992960  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:13:30.044394  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:13:30.092620  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:13:30.151671  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:13:30.195276  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
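
Each -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration on restart. The same sweep over the control-plane certs in one loop (sketch):

    # Flag any control-plane cert expiring within 24h
    for c in /var/lib/minikube/certs/{apiserver-etcd-client,apiserver-kubelet-client,front-proxy-client}.crt \
             /var/lib/minikube/certs/etcd/{server,healthcheck-client,peer}.crt; do
      sudo openssl x509 -noout -checkend 86400 -in "$c" || echo "expiring soon: $c"
    done
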
	I1025 10:13:30.238904  308083 kubeadm.go:400] StartCluster: {Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:30.239101  308083 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:13:30.239204  308083 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:13:30.304407  308083 cri.go:89] found id: "07e7673199f69cfda9e91af2a66aad345a2ce7a92130398dd12fc4e17470e088"
	I1025 10:13:30.304479  308083 cri.go:89] found id: "9e3b516f6f15caae43bda25f85832b5ad9a201e6c7b833a1ba0ec9db87f687fd"
	I1025 10:13:30.304499  308083 cri.go:89] found id: "0b2d139004d5afcec6c5e7f18831bff8c069ba521b289758825ffdd6fd892697"
	I1025 10:13:30.304523  308083 cri.go:89] found id: "322c2cc726dbd336dc6d64af52ed0d7374e34249ef33e160f4bc633c2590c50d"
	I1025 10:13:30.304554  308083 cri.go:89] found id: "170a3a9364b5079051bd3c5c594733a45ac4ddd6193638cc413453308f5c0fac"
	I1025 10:13:30.304578  308083 cri.go:89] found id: ""
	I1025 10:13:30.304661  308083 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:13:30.328956  308083 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:13:30Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:13:30.329101  308083 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:13:30.340608  308083 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:13:30.340681  308083 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:13:30.340762  308083 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:13:30.351736  308083 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:30.352209  308083 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-480889" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:13:30.352379  308083 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-259409/kubeconfig needs updating (will repair): [kubeconfig missing "ha-480889" cluster setting kubeconfig missing "ha-480889" context setting]
	I1025 10:13:30.352687  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
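
The repair step re-adds the missing "ha-480889" cluster and context entries to the kubeconfig. A minimal sketch of that kind of repair with client-go's clientcmd package (not minikube's actual implementation; the server URL and CA path are taken from the surrounding log):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/21767-259409/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	// Re-add the missing cluster and context entries for "ha-480889".
	cluster := clientcmdapi.NewCluster()
	cluster.Server = "https://192.168.49.2:8443"
	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt"
	cfg.Clusters["ha-480889"] = cluster

	ctx := clientcmdapi.NewContext()
	ctx.Cluster = "ha-480889"
	ctx.AuthInfo = "ha-480889"
	cfg.Contexts["ha-480889"] = ctx

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
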
	I1025 10:13:30.353275  308083 kapi.go:59] client config for ha-480889: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:13:30.354022  308083 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1025 10:13:30.354112  308083 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 10:13:30.354147  308083 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 10:13:30.354173  308083 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 10:13:30.354194  308083 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 10:13:30.354220  308083 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 10:13:30.354596  308083 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:13:30.369232  308083 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1025 10:13:30.369295  308083 kubeadm.go:601] duration metric: took 28.594078ms to restartPrimaryControlPlane
	I1025 10:13:30.369334  308083 kubeadm.go:402] duration metric: took 130.438978ms to StartCluster
	I1025 10:13:30.369370  308083 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:30.369458  308083 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:13:30.370118  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:30.370359  308083 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:13:30.370404  308083 start.go:241] waiting for startup goroutines ...
	I1025 10:13:30.370435  308083 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:13:30.370975  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:30.376476  308083 out.go:179] * Enabled addons: 
	I1025 10:13:30.379493  308083 addons.go:514] duration metric: took 9.050073ms for enable addons: enabled=[]
	I1025 10:13:30.379556  308083 start.go:246] waiting for cluster config update ...
	I1025 10:13:30.379587  308083 start.go:255] writing updated cluster config ...
	I1025 10:13:30.382748  308083 out.go:203] 
	I1025 10:13:30.385876  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:30.386069  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:30.389383  308083 out.go:179] * Starting "ha-480889-m02" control-plane node in "ha-480889" cluster
	I1025 10:13:30.392170  308083 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:13:30.395076  308083 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:13:30.397919  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:13:30.397962  308083 cache.go:58] Caching tarball of preloaded images
	I1025 10:13:30.398098  308083 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:13:30.398132  308083 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:13:30.398282  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:30.398534  308083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:13:30.435730  308083 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:13:30.435756  308083 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:13:30.435773  308083 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:13:30.435796  308083 start.go:360] acquireMachinesLock for ha-480889-m02: {Name:mk5fa3d1d910363d3e584c1db68856801d0a168a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:13:30.435853  308083 start.go:364] duration metric: took 36.152µs to acquireMachinesLock for "ha-480889-m02"
	I1025 10:13:30.435879  308083 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:13:30.435886  308083 fix.go:54] fixHost starting: m02
	I1025 10:13:30.436144  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m02 --format={{.State.Status}}
	I1025 10:13:30.486709  308083 fix.go:112] recreateIfNeeded on ha-480889-m02: state=Stopped err=<nil>
	W1025 10:13:30.486741  308083 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:13:30.490037  308083 out.go:252] * Restarting existing docker container for "ha-480889-m02" ...
	I1025 10:13:30.490126  308083 cli_runner.go:164] Run: docker start ha-480889-m02
	I1025 10:13:30.892304  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m02 --format={{.State.Status}}
	I1025 10:13:30.928214  308083 kic.go:430] container "ha-480889-m02" state is running.
	I1025 10:13:30.928591  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m02
	I1025 10:13:30.962308  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:30.962572  308083 machine.go:93] provisionDockerMachine start ...
	I1025 10:13:30.962636  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:30.991814  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:30.992103  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:30.992112  308083 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:13:30.992798  308083 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53254->127.0.0.1:33178: read: connection reset by peer
	I1025 10:13:34.218384  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m02
	
	I1025 10:13:34.218468  308083 ubuntu.go:182] provisioning hostname "ha-480889-m02"
	I1025 10:13:34.218568  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:34.242087  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:34.242402  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:34.242413  308083 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-480889-m02 && echo "ha-480889-m02" | sudo tee /etc/hostname
	I1025 10:13:34.553498  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m02
	
	I1025 10:13:34.553579  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:34.605778  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:34.606154  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:34.606179  308083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-480889-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-480889-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-480889-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:13:34.786380  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
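
The first dial at 10:13:30 failed with a connection reset because sshd inside the freshly restarted container was not up yet; the provisioner simply retries until the handshake succeeds (here at 10:13:34) and then runs the hostname commands shown above. A minimal retry loop with golang.org/x/crypto/ssh, under the assumption that host-key checking is deliberately skipped for these throwaway machines (auth is also elided; in practice the machine's id_rsa key is used):

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialWithRetry(addr string, cfg *ssh.ClientConfig) *ssh.Client {
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client
		}
		// Handshake failures and connection resets are expected
		// while sshd is still coming up.
		fmt.Println("retrying:", err)
		time.Sleep(time.Second)
	}
}

func main() {
	cfg := &ssh.ClientConfig{
		User: "docker",
		// Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)} in practice,
		// loaded from the machine's id_rsa key.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         5 * time.Second,
	}
	client := dialWithRetry("127.0.0.1:33178", cfg)
	defer client.Close()
}
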
	I1025 10:13:34.786405  308083 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:13:34.786423  308083 ubuntu.go:190] setting up certificates
	I1025 10:13:34.786433  308083 provision.go:84] configureAuth start
	I1025 10:13:34.786494  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m02
	I1025 10:13:34.812196  308083 provision.go:143] copyHostCerts
	I1025 10:13:34.812238  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:34.812271  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:13:34.812277  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:34.812354  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:13:34.812427  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:34.812443  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:13:34.812448  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:34.812473  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:13:34.812508  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:34.812524  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:13:34.812528  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:34.812550  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:13:34.812594  308083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.ha-480889-m02 san=[127.0.0.1 192.168.49.3 ha-480889-m02 localhost minikube]
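
The server cert above is signed by the minikube CA and carries the SAN list from the log line (127.0.0.1, 192.168.49.3, the hostname, localhost, minikube). A compressed sketch of that kind of SAN-bearing cert generation with crypto/x509, using a throwaway in-memory CA for self-containment (the real provisioner signs with .minikube/certs/ca.pem and ca-key.pem; error handling is elided):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's persistent one.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-480889-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		DNSNames:     []string{"ha-480889-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = srvDER // would be PEM-encoded and written out as server.pem
}
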
	I1025 10:13:35.433499  308083 provision.go:177] copyRemoteCerts
	I1025 10:13:35.437355  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:13:35.437432  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:35.478086  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:35.600269  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 10:13:35.600335  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:13:35.625245  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 10:13:35.625308  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 10:13:35.656095  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 10:13:35.656153  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:13:35.702462  308083 provision.go:87] duration metric: took 916.014065ms to configureAuth
	I1025 10:13:35.702539  308083 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:13:35.702849  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:35.703008  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:35.743726  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:35.744035  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:35.744050  308083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:13:36.131741  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:13:36.131816  308083 machine.go:96] duration metric: took 5.16923304s to provisionDockerMachine
	I1025 10:13:36.131850  308083 start.go:293] postStartSetup for "ha-480889-m02" (driver="docker")
	I1025 10:13:36.131900  308083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:13:36.132016  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:13:36.132089  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.151273  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.257973  308083 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:13:36.261457  308083 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:13:36.261487  308083 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:13:36.261499  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:13:36.261552  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:13:36.261635  308083 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:13:36.261648  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /etc/ssl/certs/2612562.pem
	I1025 10:13:36.261749  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:13:36.269152  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:36.286996  308083 start.go:296] duration metric: took 155.094351ms for postStartSetup
	I1025 10:13:36.287074  308083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:13:36.287145  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.305008  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.411951  308083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:13:36.420078  308083 fix.go:56] duration metric: took 5.984184266s for fixHost
	I1025 10:13:36.420100  308083 start.go:83] releasing machines lock for "ha-480889-m02", held for 5.984233964s
	I1025 10:13:36.420167  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m02
	I1025 10:13:36.443663  308083 out.go:179] * Found network options:
	I1025 10:13:36.446961  308083 out.go:179]   - NO_PROXY=192.168.49.2
	W1025 10:13:36.450808  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:13:36.450851  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	I1025 10:13:36.450943  308083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:13:36.450993  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.451266  308083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:13:36.451340  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.496453  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.500270  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.756746  308083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:13:36.868709  308083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:13:36.868786  308083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:13:36.881721  308083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:13:36.881748  308083 start.go:495] detecting cgroup driver to use...
	I1025 10:13:36.881782  308083 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:13:36.881843  308083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:13:36.907834  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:13:36.928826  308083 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:13:36.928911  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:13:36.951297  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:13:36.978500  308083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:13:37.180812  308083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:13:37.373723  308083 docker.go:234] disabling docker service ...
	I1025 10:13:37.373791  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:13:37.390746  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:13:37.405594  308083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:13:37.625534  308083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:13:37.834157  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:13:37.849602  308083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:13:37.879998  308083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:13:37.880065  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.894893  308083 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:13:37.894974  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.912955  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.922956  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.937706  308083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:13:37.948806  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.959464  308083 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.972181  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
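
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following shape (an illustrative reconstruction from the commands in the log, not a capture from the node):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
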
	I1025 10:13:37.983464  308083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:13:38.003743  308083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:13:38.037815  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:38.334072  308083 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:13:39.163742  308083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:13:39.163831  308083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:13:39.169004  308083 start.go:563] Will wait 60s for crictl version
	I1025 10:13:39.169072  308083 ssh_runner.go:195] Run: which crictl
	I1025 10:13:39.173735  308083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:13:39.204784  308083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:13:39.204890  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:39.239278  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:39.276711  308083 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:13:39.279715  308083 out.go:179]   - env NO_PROXY=192.168.49.2
	I1025 10:13:39.282816  308083 cli_runner.go:164] Run: docker network inspect ha-480889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:13:39.299629  308083 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 10:13:39.303856  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:39.314044  308083 mustload.go:65] Loading cluster: ha-480889
	I1025 10:13:39.314294  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:39.314598  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:13:39.343892  308083 host.go:66] Checking if "ha-480889" exists ...
	I1025 10:13:39.344182  308083 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889 for IP: 192.168.49.3
	I1025 10:13:39.344197  308083 certs.go:195] generating shared ca certs ...
	I1025 10:13:39.344211  308083 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:39.344335  308083 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:13:39.344393  308083 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:13:39.344406  308083 certs.go:257] generating profile certs ...
	I1025 10:13:39.344480  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key
	I1025 10:13:39.344547  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.1eaed255
	I1025 10:13:39.344593  308083 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key
	I1025 10:13:39.344606  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 10:13:39.344620  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 10:13:39.344636  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 10:13:39.344647  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 10:13:39.344663  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 10:13:39.344687  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 10:13:39.344718  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 10:13:39.344732  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 10:13:39.344792  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:13:39.344825  308083 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:13:39.344838  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:13:39.344861  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:13:39.344888  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:13:39.344914  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:13:39.344981  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:39.345016  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /usr/share/ca-certificates/2612562.pem
	I1025 10:13:39.345034  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:39.345045  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem -> /usr/share/ca-certificates/261256.pem
	I1025 10:13:39.345112  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:39.371934  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:39.470344  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1025 10:13:39.483516  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1025 10:13:39.501845  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1025 10:13:39.507200  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1025 10:13:39.527252  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1025 10:13:39.532933  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1025 10:13:39.549399  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1025 10:13:39.554586  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1025 10:13:39.570659  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1025 10:13:39.574962  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1025 10:13:39.584673  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1025 10:13:39.589172  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1025 10:13:39.598913  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:13:39.620680  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:13:39.644461  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:13:39.668589  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:13:39.692311  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:13:39.712807  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:13:39.739124  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:13:39.767676  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:13:39.790850  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:13:39.811105  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:13:39.833707  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:13:39.856043  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1025 10:13:39.869628  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1025 10:13:39.883404  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1025 10:13:39.897013  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1025 10:13:39.919485  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1025 10:13:39.945523  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1025 10:13:39.967210  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1025 10:13:39.994983  308083 ssh_runner.go:195] Run: openssl version
	I1025 10:13:40.002778  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:13:40.017144  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:13:40.022850  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:13:40.022982  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:13:40.073080  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:13:40.081683  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:13:40.090847  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:13:40.096142  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:13:40.096266  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:13:40.138985  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:13:40.147554  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:13:40.156382  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:40.161029  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:40.161195  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:40.202792  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:13:40.211314  308083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:13:40.215961  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:13:40.258002  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:13:40.301047  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:13:40.349624  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:13:40.395242  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:13:40.444494  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 10:13:40.496874  308083 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1025 10:13:40.496975  308083 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-480889-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:13:40.497007  308083 kube-vip.go:115] generating kube-vip config ...
	I1025 10:13:40.497062  308083 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1025 10:13:40.539654  308083 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
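
kube-vip's control-plane load balancing needs the IPVS kernel modules, so when `lsmod | grep ip_vs` comes back empty the generator falls back to a plain VIP configuration, as the message above says. A sketch of the same probe done natively by reading /proc/modules instead of shelling out (an alternative approach, not the code minikube runs):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func moduleLoaded(name string) bool {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each /proc/modules line starts with the module name and a space.
		if strings.HasPrefix(sc.Text(), name+" ") {
			return true
		}
	}
	return false
}

func main() {
	if !moduleLoaded("ip_vs") {
		fmt.Println("ip_vs not available; disabling control-plane load-balancing")
	}
}
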
	I1025 10:13:40.539717  308083 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1025 10:13:40.539780  308083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:13:40.558469  308083 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:13:40.558603  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1025 10:13:40.566867  308083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 10:13:40.583436  308083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:13:40.596901  308083 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1025 10:13:40.612066  308083 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1025 10:13:40.616047  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:40.627164  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:40.770079  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:40.784212  308083 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:13:40.784687  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:40.790656  308083 out.go:179] * Verifying Kubernetes components...
	I1025 10:13:40.793379  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:40.919442  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:40.934315  308083 kapi.go:59] client config for ha-480889: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1025 10:13:40.934388  308083 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1025 10:13:40.936607  308083 node_ready.go:35] waiting up to 6m0s for node "ha-480889-m02" to be "Ready" ...
	I1025 10:14:03.978798  308083 node_ready.go:49] node "ha-480889-m02" is "Ready"
	I1025 10:14:03.978827  308083 node_ready.go:38] duration metric: took 23.042187504s for node "ha-480889-m02" to be "Ready" ...
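
The 23-second wait above is a poll for the node's NodeReady condition to flip to True. A minimal sketch of that kind of readiness poll with client-go (a simplified stand-in for node_ready.go, reusing the kubeconfig path from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21767-259409/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Matches the 6m0s budget in the log.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-480889-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for node to be Ready")
}
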
	I1025 10:14:03.978841  308083 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:14:03.978901  308083 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:14:04.002008  308083 api_server.go:72] duration metric: took 23.217688145s to wait for apiserver process to appear ...
	I1025 10:14:04.002035  308083 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:14:04.002057  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:04.065805  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:14:04.065839  308083 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
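
The 500s above are expected during startup: /healthz aggregates the apiserver's post-start hooks, and entries like rbac/bootstrap-roles stay failed until those hooks finish, so the caller just re-polls until the endpoint returns 200. A minimal polling sketch with net/http (certificate verification is skipped here purely to keep the sketch self-contained; the real check trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// InsecureSkipVerify only for this sketch; pin the cluster CA in practice.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
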
	I1025 10:14:04.502158  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:04.511711  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:14:04.511802  308083 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:14:05.002194  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:05.013361  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:14:05.013506  308083 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:14:05.503134  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:05.514732  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 10:14:05.518544  308083 api_server.go:141] control plane version: v1.34.1
	I1025 10:14:05.518622  308083 api_server.go:131] duration metric: took 1.516578961s to wait for apiserver health ...
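The probes above follow a simple pattern: GET /healthz roughly every 500ms, treat anything but 200 as "not yet healthy", and print the per-poststarthook breakdown from the 500 body. A minimal Go sketch of that pattern — the URL, interval, timeout, and TLS handling are illustrative assumptions, not minikube's actual api_server.go code:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
    	// A real client would trust the cluster CA instead of skipping verification.
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // "healthz returned 200: ok"
    			}
    			// 500 bodies enumerate each [+]/[-] poststarthook, as seen in the log.
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between probes
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }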
	I1025 10:14:05.518646  308083 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:14:05.535848  308083 system_pods.go:59] 26 kube-system pods found
	I1025 10:14:05.535941  308083 system_pods.go:61] "coredns-66bc5c9577-ctnsn" [4c76c01c-15ed-4930-ac1a-1e2bf7de3961] Running
	I1025 10:14:05.535963  308083 system_pods.go:61] "coredns-66bc5c9577-h4lrc" [ade89685-c5d2-4e4e-847d-7af6cb3fb862] Running
	I1025 10:14:05.535986  308083 system_pods.go:61] "etcd-ha-480889" [e343e174-731b-4eb7-97df-0220f254bfcf] Running
	I1025 10:14:05.536032  308083 system_pods.go:61] "etcd-ha-480889-m02" [52f56789-d8bf-4251-9316-a0b572f65125] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:14:05.536059  308083 system_pods.go:61] "etcd-ha-480889-m03" [7fb90646-4b60-4cc2-a527-c7e563bb182b] Running
	I1025 10:14:05.536100  308083 system_pods.go:61] "kindnet-227ts" [c2c62be9-5d6e-4a43-9eff-9a7e220282d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:14:05.536125  308083 system_pods.go:61] "kindnet-2fqxj" [da4ef885-af3d-4ee3-9151-cdca0253c911] Running
	I1025 10:14:05.536154  308083 system_pods.go:61] "kindnet-8fgmd" [13833b7e-6794-4f30-8bec-20375bd481f2] Running
	I1025 10:14:05.536192  308083 system_pods.go:61] "kindnet-92p8z" [c1f4d260-381c-42d8-a8a5-77ae60cf42c6] Running
	I1025 10:14:05.536214  308083 system_pods.go:61] "kube-apiserver-ha-480889" [3f293b6b-7247-48a0-aa80-508696bea727] Running
	I1025 10:14:05.536251  308083 system_pods.go:61] "kube-apiserver-ha-480889-m02" [faae5baa-e581-4254-b659-0687cfebfb67] Running
	I1025 10:14:05.536276  308083 system_pods.go:61] "kube-apiserver-ha-480889-m03" [f18f8a4d-22bd-48e4-9b23-e5383f2fce25] Running
	I1025 10:14:05.536299  308083 system_pods.go:61] "kube-controller-manager-ha-480889" [6c111362-d576-4cb0-b102-086f180ff7b7] Running
	I1025 10:14:05.536340  308083 system_pods.go:61] "kube-controller-manager-ha-480889-m02" [443192d3-d7a3-40c4-99bf-2a1eac354f88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:14:05.536367  308083 system_pods.go:61] "kube-controller-manager-ha-480889-m03" [c5d29ad2-f161-4c39-9de4-35916c43e02b] Running
	I1025 10:14:05.536392  308083 system_pods.go:61] "kube-proxy-29hlq" [2c0b691f-c26f-49bd-9b8b-39819ca8539d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:14:05.536425  308083 system_pods.go:61] "kube-proxy-4d5ks" [058d38d9-4dec-40ff-ac68-9651d27ba0c6] Running
	I1025 10:14:05.536449  308083 system_pods.go:61] "kube-proxy-6x5rb" [e73b3f75-02d7-46e3-940c-ffd727e4c87d] Running
	I1025 10:14:05.536471  308083 system_pods.go:61] "kube-proxy-9rtcs" [6fd17399-e636-4de6-aa9c-e0e3d3656c41] Running
	I1025 10:14:05.536506  308083 system_pods.go:61] "kube-scheduler-ha-480889" [9036810d-dce1-4542-ac53-b5d70020809c] Running
	I1025 10:14:05.536532  308083 system_pods.go:61] "kube-scheduler-ha-480889-m02" [f4c7c190-55e0-4bbf-9c22-fe9b3d8fc98d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:14:05.536556  308083 system_pods.go:61] "kube-scheduler-ha-480889-m03" [fdcb0331-d8b0-4fb0-9549-459e365b5863] Running
	I1025 10:14:05.536591  308083 system_pods.go:61] "kube-vip-ha-480889" [07959933-b7f0-46ad-9fa2-d9c661db7882] Running
	I1025 10:14:05.536614  308083 system_pods.go:61] "kube-vip-ha-480889-m02" [fea939ce-de9c-446b-b961-37a72c945913] Running
	I1025 10:14:05.536639  308083 system_pods.go:61] "kube-vip-ha-480889-m03" [f2a5dbed-19e6-4092-8340-c798578dfd40] Running
	I1025 10:14:05.536679  308083 system_pods.go:61] "storage-provisioner" [15113825-bb63-434f-bd5e-2ffd789452d6] Running
	I1025 10:14:05.536705  308083 system_pods.go:74] duration metric: took 18.038599ms to wait for pod list to return data ...
	I1025 10:14:05.536727  308083 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:14:05.551153  308083 default_sa.go:45] found service account: "default"
	I1025 10:14:05.551231  308083 default_sa.go:55] duration metric: took 14.469512ms for default service account to be created ...
	I1025 10:14:05.551256  308083 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:14:05.562144  308083 system_pods.go:86] 26 kube-system pods found
	I1025 10:14:05.562232  308083 system_pods.go:89] "coredns-66bc5c9577-ctnsn" [4c76c01c-15ed-4930-ac1a-1e2bf7de3961] Running
	I1025 10:14:05.562257  308083 system_pods.go:89] "coredns-66bc5c9577-h4lrc" [ade89685-c5d2-4e4e-847d-7af6cb3fb862] Running
	I1025 10:14:05.562298  308083 system_pods.go:89] "etcd-ha-480889" [e343e174-731b-4eb7-97df-0220f254bfcf] Running
	I1025 10:14:05.562329  308083 system_pods.go:89] "etcd-ha-480889-m02" [52f56789-d8bf-4251-9316-a0b572f65125] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:14:05.562357  308083 system_pods.go:89] "etcd-ha-480889-m03" [7fb90646-4b60-4cc2-a527-c7e563bb182b] Running
	I1025 10:14:05.562400  308083 system_pods.go:89] "kindnet-227ts" [c2c62be9-5d6e-4a43-9eff-9a7e220282d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:14:05.562424  308083 system_pods.go:89] "kindnet-2fqxj" [da4ef885-af3d-4ee3-9151-cdca0253c911] Running
	I1025 10:14:05.562452  308083 system_pods.go:89] "kindnet-8fgmd" [13833b7e-6794-4f30-8bec-20375bd481f2] Running
	I1025 10:14:05.562486  308083 system_pods.go:89] "kindnet-92p8z" [c1f4d260-381c-42d8-a8a5-77ae60cf42c6] Running
	I1025 10:14:05.562513  308083 system_pods.go:89] "kube-apiserver-ha-480889" [3f293b6b-7247-48a0-aa80-508696bea727] Running
	I1025 10:14:05.562563  308083 system_pods.go:89] "kube-apiserver-ha-480889-m02" [faae5baa-e581-4254-b659-0687cfebfb67] Running
	I1025 10:14:05.562590  308083 system_pods.go:89] "kube-apiserver-ha-480889-m03" [f18f8a4d-22bd-48e4-9b23-e5383f2fce25] Running
	I1025 10:14:05.562616  308083 system_pods.go:89] "kube-controller-manager-ha-480889" [6c111362-d576-4cb0-b102-086f180ff7b7] Running
	I1025 10:14:05.562658  308083 system_pods.go:89] "kube-controller-manager-ha-480889-m02" [443192d3-d7a3-40c4-99bf-2a1eac354f88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:14:05.562685  308083 system_pods.go:89] "kube-controller-manager-ha-480889-m03" [c5d29ad2-f161-4c39-9de4-35916c43e02b] Running
	I1025 10:14:05.562729  308083 system_pods.go:89] "kube-proxy-29hlq" [2c0b691f-c26f-49bd-9b8b-39819ca8539d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:14:05.562755  308083 system_pods.go:89] "kube-proxy-4d5ks" [058d38d9-4dec-40ff-ac68-9651d27ba0c6] Running
	I1025 10:14:05.562843  308083 system_pods.go:89] "kube-proxy-6x5rb" [e73b3f75-02d7-46e3-940c-ffd727e4c87d] Running
	I1025 10:14:05.562883  308083 system_pods.go:89] "kube-proxy-9rtcs" [6fd17399-e636-4de6-aa9c-e0e3d3656c41] Running
	I1025 10:14:05.562903  308083 system_pods.go:89] "kube-scheduler-ha-480889" [9036810d-dce1-4542-ac53-b5d70020809c] Running
	I1025 10:14:05.562928  308083 system_pods.go:89] "kube-scheduler-ha-480889-m02" [f4c7c190-55e0-4bbf-9c22-fe9b3d8fc98d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:14:05.562965  308083 system_pods.go:89] "kube-scheduler-ha-480889-m03" [fdcb0331-d8b0-4fb0-9549-459e365b5863] Running
	I1025 10:14:05.562991  308083 system_pods.go:89] "kube-vip-ha-480889" [07959933-b7f0-46ad-9fa2-d9c661db7882] Running
	I1025 10:14:05.563016  308083 system_pods.go:89] "kube-vip-ha-480889-m02" [fea939ce-de9c-446b-b961-37a72c945913] Running
	I1025 10:14:05.563070  308083 system_pods.go:89] "kube-vip-ha-480889-m03" [f2a5dbed-19e6-4092-8340-c798578dfd40] Running
	I1025 10:14:05.563096  308083 system_pods.go:89] "storage-provisioner" [15113825-bb63-434f-bd5e-2ffd789452d6] Running
	I1025 10:14:05.563122  308083 system_pods.go:126] duration metric: took 11.844458ms to wait for k8s-apps to be running ...
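The k8s-apps check above amounts to listing kube-system pods and inspecting each pod's phase and readiness. A hedged client-go sketch of the listing half — the kubeconfig path is a placeholder, and minikube's system_pods.go does more (e.g. the ContainersNotReady annotations seen above):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; minikube builds its client from the profile dir.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		// Phase alone is what the "Running" column reflects; container readiness is
    		// a separate condition check (hence "Running / Ready:ContainersNotReady").
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}
    }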
	I1025 10:14:05.563161  308083 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:14:05.563251  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:14:05.583878  308083 system_svc.go:56] duration metric: took 20.700093ms WaitForService to wait for kubelet
	I1025 10:14:05.583959  308083 kubeadm.go:586] duration metric: took 24.799662385s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:14:05.584013  308083 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:14:05.602014  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602101  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602129  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602149  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602183  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602208  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602232  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602268  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602294  308083 node_conditions.go:105] duration metric: took 18.245402ms to run NodePressure ...
	I1025 10:14:05.602322  308083 start.go:241] waiting for startup goroutines ...
	I1025 10:14:05.602372  308083 start.go:255] writing updated cluster config ...
	I1025 10:14:05.606107  308083 out.go:203] 
	I1025 10:14:05.609375  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:14:05.609570  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:14:05.612923  308083 out.go:179] * Starting "ha-480889-m03" control-plane node in "ha-480889" cluster
	I1025 10:14:05.616650  308083 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:14:05.619578  308083 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:14:05.622647  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:14:05.622723  308083 cache.go:58] Caching tarball of preloaded images
	I1025 10:14:05.622730  308083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:14:05.622888  308083 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:14:05.622906  308083 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:14:05.623058  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:14:05.644689  308083 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:14:05.644714  308083 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:14:05.644728  308083 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:14:05.644760  308083 start.go:360] acquireMachinesLock for ha-480889-m03: {Name:mkdc7aead07cc61c4483ca641c0f901f32cc9e0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:14:05.644832  308083 start.go:364] duration metric: took 40.6µs to acquireMachinesLock for "ha-480889-m03"
	I1025 10:14:05.644859  308083 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:14:05.644869  308083 fix.go:54] fixHost starting: m03
	I1025 10:14:05.645136  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m03 --format={{.State.Status}}
	I1025 10:14:05.665455  308083 fix.go:112] recreateIfNeeded on ha-480889-m03: state=Stopped err=<nil>
	W1025 10:14:05.665482  308083 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:14:05.668964  308083 out.go:252] * Restarting existing docker container for "ha-480889-m03" ...
	I1025 10:14:05.669067  308083 cli_runner.go:164] Run: docker start ha-480889-m03
	I1025 10:14:06.010869  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m03 --format={{.State.Status}}
	I1025 10:14:06.033631  308083 kic.go:430] container "ha-480889-m03" state is running.
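The container state queries in this log shell out to the docker CLI with a Go template. A small sketch of the same call via os/exec — the container name is taken from the log, and error handling is simplified:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerState mirrors `docker container inspect --format={{.State.Status}}`.
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		"--format", "{{.State.Status}}", name).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", name, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	state, err := containerState("ha-480889-m03")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(state) // e.g. "running" after `docker start`
    }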
	I1025 10:14:06.034025  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m03
	I1025 10:14:06.062398  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:14:06.062842  308083 machine.go:93] provisionDockerMachine start ...
	I1025 10:14:06.062924  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:06.096711  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:06.097013  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:06.097022  308083 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:14:06.100286  308083 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44394->127.0.0.1:33183: read: connection reset by peer
	I1025 10:14:09.422447  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m03
	
	I1025 10:14:09.422528  308083 ubuntu.go:182] provisioning hostname "ha-480889-m03"
	I1025 10:14:09.422611  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:09.454682  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:09.454994  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:09.455005  308083 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-480889-m03 && echo "ha-480889-m03" | sudo tee /etc/hostname
	I1025 10:14:09.716055  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m03
	
	I1025 10:14:09.716202  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:09.758198  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:09.758502  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:09.758518  308083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-480889-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-480889-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-480889-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:14:09.952740  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:14:09.952771  308083 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:14:09.952843  308083 ubuntu.go:190] setting up certificates
	I1025 10:14:09.952854  308083 provision.go:84] configureAuth start
	I1025 10:14:09.952966  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m03
	I1025 10:14:10.002091  308083 provision.go:143] copyHostCerts
	I1025 10:14:10.002146  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:14:10.002194  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:14:10.002207  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:14:10.002336  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:14:10.002445  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:14:10.002473  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:14:10.002482  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:14:10.002512  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:14:10.002620  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:14:10.002645  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:14:10.002656  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:14:10.002686  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:14:10.002748  308083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.ha-480889-m03 san=[127.0.0.1 192.168.49.4 ha-480889-m03 localhost minikube]
	I1025 10:14:10.250973  308083 provision.go:177] copyRemoteCerts
	I1025 10:14:10.251332  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:14:10.251408  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:10.289237  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:10.436731  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 10:14:10.436797  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 10:14:10.544747  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 10:14:10.544817  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:14:10.630377  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 10:14:10.630464  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:14:10.673862  308083 provision.go:87] duration metric: took 720.988399ms to configureAuth
	I1025 10:14:10.673890  308083 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:14:10.674168  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:14:10.674521  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:10.707641  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:10.707938  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:10.707957  308083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:14:11.154845  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:14:11.154927  308083 machine.go:96] duration metric: took 5.092069874s to provisionDockerMachine
	I1025 10:14:11.154954  308083 start.go:293] postStartSetup for "ha-480889-m03" (driver="docker")
	I1025 10:14:11.154994  308083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:14:11.155090  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:14:11.155169  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.175592  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.283365  308083 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:14:11.286806  308083 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:14:11.286877  308083 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:14:11.286905  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:14:11.286994  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:14:11.287123  308083 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:14:11.287171  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /etc/ssl/certs/2612562.pem
	I1025 10:14:11.287295  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:14:11.295059  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:14:11.316093  308083 start.go:296] duration metric: took 161.095107ms for postStartSetup
	I1025 10:14:11.316217  308083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:14:11.316276  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.333862  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.435204  308083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:14:11.440180  308083 fix.go:56] duration metric: took 5.79530454s for fixHost
	I1025 10:14:11.440241  308083 start.go:83] releasing machines lock for "ha-480889-m03", held for 5.795361279s
	I1025 10:14:11.440311  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m03
	I1025 10:14:11.464304  308083 out.go:179] * Found network options:
	I1025 10:14:11.467314  308083 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1025 10:14:11.470389  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:14:11.470430  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:14:11.470457  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:14:11.470474  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	I1025 10:14:11.470546  308083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:14:11.470610  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.470888  308083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:14:11.470954  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.492648  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.500283  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.796571  308083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:14:11.919974  308083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:14:11.920047  308083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:14:11.930959  308083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:14:11.931034  308083 start.go:495] detecting cgroup driver to use...
	I1025 10:14:11.931084  308083 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:14:11.931150  308083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:14:11.976106  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:14:12.014574  308083 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:14:12.014688  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:14:12.063668  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:14:12.091979  308083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:14:12.314959  308083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:14:12.575887  308083 docker.go:234] disabling docker service ...
	I1025 10:14:12.575989  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:14:12.601545  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:14:12.619323  308083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:14:12.867377  308083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:14:13.108726  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:14:13.127994  308083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:14:13.145943  308083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:14:13.146033  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.156671  308083 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:14:13.156750  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.168655  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.184089  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.194894  308083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:14:13.204315  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.214077  308083 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.224397  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.234566  308083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:14:13.243678  308083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:14:13.253013  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:14:13.493138  308083 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:15:43.813681  308083 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320502184s)
	I1025 10:15:43.813712  308083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:15:43.813771  308083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:15:43.818284  308083 start.go:563] Will wait 60s for crictl version
	I1025 10:15:43.818348  308083 ssh_runner.go:195] Run: which crictl
	I1025 10:15:43.822612  308083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:15:43.849591  308083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:15:43.849679  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:15:43.881155  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:15:43.916090  308083 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:15:43.919321  308083 out.go:179]   - env NO_PROXY=192.168.49.2
	I1025 10:15:43.922326  308083 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1025 10:15:43.925259  308083 cli_runner.go:164] Run: docker network inspect ha-480889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:15:43.954223  308083 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 10:15:43.958732  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:15:43.969465  308083 mustload.go:65] Loading cluster: ha-480889
	I1025 10:15:43.969714  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:15:43.969954  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:15:43.987361  308083 host.go:66] Checking if "ha-480889" exists ...
	I1025 10:15:43.987646  308083 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889 for IP: 192.168.49.4
	I1025 10:15:43.987660  308083 certs.go:195] generating shared ca certs ...
	I1025 10:15:43.987675  308083 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:15:43.987792  308083 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:15:43.987838  308083 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:15:43.987850  308083 certs.go:257] generating profile certs ...
	I1025 10:15:43.987924  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key
	I1025 10:15:43.987987  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.7d4a26e1
	I1025 10:15:43.988022  308083 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key
	I1025 10:15:43.988030  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 10:15:43.988044  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 10:15:43.988056  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 10:15:43.988066  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 10:15:43.988076  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 10:15:43.988088  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 10:15:43.988099  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 10:15:43.988111  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 10:15:43.988160  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:15:43.988188  308083 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:15:43.988197  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:15:43.988222  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:15:43.988244  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:15:43.988266  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:15:43.988306  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:15:43.988330  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /usr/share/ca-certificates/2612562.pem
	I1025 10:15:43.988342  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:43.988353  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem -> /usr/share/ca-certificates/261256.pem
	I1025 10:15:43.988408  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:15:44.012522  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:15:44.114325  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1025 10:15:44.118993  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1025 10:15:44.127630  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1025 10:15:44.131303  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1025 10:15:44.140046  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1025 10:15:44.144492  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1025 10:15:44.154086  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1025 10:15:44.158181  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1025 10:15:44.167518  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1025 10:15:44.171723  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1025 10:15:44.181427  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1025 10:15:44.185332  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1025 10:15:44.194266  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:15:44.214098  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:15:44.234054  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:15:44.256195  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:15:44.279031  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:15:44.299344  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:15:44.323793  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:15:44.345417  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:15:44.365719  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:15:44.388245  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:15:44.408144  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:15:44.428098  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1025 10:15:44.441938  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1025 10:15:44.457102  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1025 10:15:44.471357  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1025 10:15:44.485615  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1025 10:15:44.498465  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1025 10:15:44.511910  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1025 10:15:44.531258  308083 ssh_runner.go:195] Run: openssl version
	I1025 10:15:44.540606  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:15:44.550354  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:15:44.554246  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:15:44.554361  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:15:44.602272  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:15:44.611902  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:15:44.622835  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:44.629226  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:44.629299  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:44.670883  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:15:44.679524  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:15:44.689802  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:15:44.693893  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:15:44.694068  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:15:44.735651  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:15:44.743736  308083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:15:44.747896  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:15:44.790110  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:15:44.832406  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:15:44.874662  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:15:44.915849  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:15:44.959092  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 10:15:45.002430  308083 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1025 10:15:45.002579  308083 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-480889-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:15:45.002609  308083 kube-vip.go:115] generating kube-vip config ...
	I1025 10:15:45.002683  308083 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1025 10:15:45.029854  308083 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:15:45.029925  308083 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1025 10:15:45.030057  308083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:15:45.063539  308083 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:15:45.063684  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1025 10:15:45.095087  308083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 10:15:45.131847  308083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:15:45.152140  308083 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1025 10:15:45.177067  308083 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1025 10:15:45.183642  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:15:45.224794  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:15:45.476283  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:15:45.492420  308083 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:15:45.492955  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:15:45.496345  308083 out.go:179] * Verifying Kubernetes components...
	I1025 10:15:45.499247  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:15:45.679197  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:15:45.698347  308083 kapi.go:59] client config for ha-480889: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1025 10:15:45.698425  308083 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
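
The dump above is a client-go rest.Config assembled from the profile's client certificate, key, and cluster CA; the warning that follows swaps the stale VIP host for a concrete node address before any API call is made. A hedged sketch of that construction (the certificate paths here are placeholders, not the real profile paths):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://192.168.49.254:8443", // the (possibly stale) VIP endpoint
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/path/to/client.crt", // placeholder paths
                KeyFile:  "/path/to/client.key",
                CAFile:   "/path/to/ca.crt",
            },
        }
        // Mirror the "Overriding stale ClientConfig host" step: point the
        // client at a concrete control-plane node instead of the VIP.
        cfg.Host = "https://192.168.49.2:8443"
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println(err)
            return
        }
        _ = cs // ready for API calls
    }
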
	I1025 10:15:45.698682  308083 node_ready.go:35] waiting up to 6m0s for node "ha-480889-m03" to be "Ready" ...
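
The wall of retries that follows is a readiness poll: roughly every 2 to 2.5 seconds the node's Ready condition is read, and the loop gives up once the 6-minute budget is spent. A minimal client-go sketch of such a poll (assuming a clientset like the one built above; this is not minikube's own node_ready.go code):

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitNodeReady polls the node's Ready condition until the 6m budget is
    // spent, roughly matching the cadence of the retries below.
    func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat lookup errors as transient; keep retrying
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
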
	W1025 10:15:47.704097  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:15:50.202392  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:15:52.202756  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:15:54.702519  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:15:56.703376  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:15:59.202538  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:01.203022  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:03.702456  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:05.702876  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:08.203621  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:10.702751  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:12.702907  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:14.703027  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:17.202640  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:19.702153  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:21.702531  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:24.202537  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:26.203031  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:28.703368  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:31.203812  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:33.702780  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:35.702906  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:38.202338  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:40.203167  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:42.702490  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:44.702835  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:47.202196  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:49.202526  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:51.202870  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:53.702683  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:55.703197  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:16:58.202338  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:00.206336  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:02.702377  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:04.702956  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:06.703174  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:09.203441  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:11.702806  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:13.710569  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:16.202672  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:18.702234  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:20.702880  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:23.202095  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:25.702246  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:28.202837  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:30.702454  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:33.202247  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:35.202785  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:37.203762  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:39.204260  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:41.702127  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:43.702287  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:45.703093  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:48.201849  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:50.202459  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:52.702854  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:55.202331  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:57.203185  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:17:59.702528  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:01.703331  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:04.202726  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:06.703053  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:09.204373  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:11.701897  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:13.702021  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:15.702198  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:17.702881  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:19.703383  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:22.202517  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:24.702492  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:26.702790  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:28.703087  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:30.703165  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:33.201850  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:35.202988  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:37.702399  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:40.203419  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:42.706153  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:45.204630  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:47.702150  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:49.702926  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:52.202337  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:54.202520  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:56.205653  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:18:58.703493  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:01.202899  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:03.703116  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:06.202631  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:08.702752  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:11.202319  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:13.202983  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:15.702637  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:17.702727  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:19.703012  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:22.202503  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:24.202611  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:26.203025  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:28.703023  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:30.704166  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:33.202334  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:35.202526  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:37.702970  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:40.202164  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:42.209403  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:44.702793  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:47.202837  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:49.203091  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:51.702900  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:54.202265  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:56.202768  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:19:58.701910  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:00.709849  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:03.202825  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:05.202888  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:07.203134  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:09.702506  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:11.702866  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:13.703432  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:16.203193  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:18.702327  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:21.202730  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:23.701909  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:25.702331  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:28.203063  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:30.702507  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:33.202600  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:35.701600  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:37.705510  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:40.203102  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:42.203440  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:44.702386  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:46.703047  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:49.202068  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:51.202968  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:53.701922  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:56.202876  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:58.702653  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:00.702696  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:03.202988  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:05.702469  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:08.203875  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:10.702312  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:12.702439  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:15.202672  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:17.203173  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:19.702749  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:22.202730  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:24.211889  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:26.702317  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:28.703052  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:31.202771  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:33.702614  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:35.702649  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:37.702884  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:40.202737  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:42.203329  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:44.702141  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	I1025 10:21:45.699723  308083 node_ready.go:38] duration metric: took 6m0.00101372s for node "ha-480889-m03" to be "Ready" ...
	I1025 10:21:45.702936  308083 out.go:203] 
	W1025 10:21:45.705812  308083 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1025 10:21:45.705837  308083 out.go:285] * 
	W1025 10:21:45.708064  308083 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:21:45.711065  308083 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-480889 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-480889
helpers_test.go:243: (dbg) docker inspect ha-480889:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb",
	        "Created": "2025-10-25T10:07:16.735876836Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308208,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:13:21.399696936Z",
	            "FinishedAt": "2025-10-25T10:13:20.79843666Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/hosts",
	        "LogPath": "/var/lib/docker/containers/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb-json.log",
	        "Name": "/ha-480889",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-480889:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-480889",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb",
	                "LowerDir": "/var/lib/docker/overlay2/d159db5e3fba2acaf2be751adfd990d9559a06fe4315850b3c072a95af080135-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d159db5e3fba2acaf2be751adfd990d9559a06fe4315850b3c072a95af080135/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d159db5e3fba2acaf2be751adfd990d9559a06fe4315850b3c072a95af080135/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d159db5e3fba2acaf2be751adfd990d9559a06fe4315850b3c072a95af080135/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-480889",
	                "Source": "/var/lib/docker/volumes/ha-480889/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-480889",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-480889",
	                "name.minikube.sigs.k8s.io": "ha-480889",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "791d4899d5afa7873aa61454e9b98c6bf4cae328e5fac1d61bfb6966ee8cf636",
	            "SandboxKey": "/var/run/docker/netns/791d4899d5af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-480889": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:5c:03:eb:9b:24",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2218a4d410c8591103e2cd6973cfcc03970e864955c570ceafd8b830a42f8a91",
	                    "EndpointID": "f005f7f20c8dfee253108089d9a6288d3bd36c3e1a48e0821c1ab3d225d34362",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-480889",
	                        "808d21fd84e3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
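The inspect output above shows each container port published on an ephemeral loopback port (22/tcp on 127.0.0.1:33173, 8443/tcp on 33176, and so on). The same Go template the provisioner uses later in this log can pull out a single mapping; a sketch that shells out to docker for it:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Identical to the template the log itself uses to locate the SSH port.
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-480889").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33173
    }
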
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-480889 -n ha-480889
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 logs -n 25: (1.482836286s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-480889 cp ha-480889-m03:/home/docker/cp-test.txt ha-480889-m02:/home/docker/cp-test_ha-480889-m03_ha-480889-m02.txt               │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m02 sudo cat /home/docker/cp-test_ha-480889-m03_ha-480889-m02.txt                                         │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m03:/home/docker/cp-test.txt ha-480889-m04:/home/docker/cp-test_ha-480889-m03_ha-480889-m04.txt               │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test_ha-480889-m03_ha-480889-m04.txt                                         │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp testdata/cp-test.txt ha-480889-m04:/home/docker/cp-test.txt                                                             │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3016407791/001/cp-test_ha-480889-m04.txt │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt ha-480889:/home/docker/cp-test_ha-480889-m04_ha-480889.txt                       │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889 sudo cat /home/docker/cp-test_ha-480889-m04_ha-480889.txt                                                 │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt ha-480889-m02:/home/docker/cp-test_ha-480889-m04_ha-480889-m02.txt               │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m02 sudo cat /home/docker/cp-test_ha-480889-m04_ha-480889-m02.txt                                         │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt ha-480889-m03:/home/docker/cp-test_ha-480889-m04_ha-480889-m03.txt               │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m03 sudo cat /home/docker/cp-test_ha-480889-m04_ha-480889-m03.txt                                         │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ node    │ ha-480889 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ node    │ ha-480889 node start m02 --alsologtostderr -v 5                                                                                      │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ node    │ ha-480889 node list --alsologtostderr -v 5                                                                                           │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ stop    │ ha-480889 stop --alsologtostderr -v 5                                                                                                │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ ha-480889 start --wait true --alsologtostderr -v 5                                                                                   │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ node    │ ha-480889 node list --alsologtostderr -v 5                                                                                           │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:13:21
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:13:21.133168  308083 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:13:21.133290  308083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:21.133303  308083 out.go:374] Setting ErrFile to fd 2...
	I1025 10:13:21.133309  308083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:21.133562  308083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:13:21.133919  308083 out.go:368] Setting JSON to false
	I1025 10:13:21.134805  308083 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6953,"bootTime":1761380249,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:13:21.134877  308083 start.go:141] virtualization:  
	I1025 10:13:21.140316  308083 out.go:179] * [ha-480889] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:13:21.143327  308083 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:13:21.143404  308083 notify.go:220] Checking for updates...
	I1025 10:13:21.149301  308083 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:13:21.152089  308083 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:13:21.154925  308083 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:13:21.157773  308083 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:13:21.160618  308083 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:13:21.164113  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:21.164223  308083 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:13:21.197583  308083 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:13:21.197765  308083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:13:21.253016  308083 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-25 10:13:21.243524818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:13:21.253128  308083 docker.go:318] overlay module found
	I1025 10:13:21.256213  308083 out.go:179] * Using the docker driver based on existing profile
	I1025 10:13:21.259079  308083 start.go:305] selected driver: docker
	I1025 10:13:21.259120  308083 start.go:925] validating driver "docker" against &{Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:21.259253  308083 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:13:21.259348  308083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:13:21.316248  308083 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-25 10:13:21.30638419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:13:21.316658  308083 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:13:21.316688  308083 cni.go:84] Creating CNI manager for ""
	I1025 10:13:21.316750  308083 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1025 10:13:21.316803  308083 start.go:349] cluster config:
	{Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
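
Each entry in the Nodes list of the config above carries the per-node fields the restart path consumes; the node that never becomes Ready is m03 at 192.168.49.4. A struct sketch of just the shape visible in this dump (the real minikube config type has more fields than shown here):

    // Node mirrors only the per-node fields visible in the dump above.
    type Node struct {
        Name              string
        IP                string
        Port              int
        KubernetesVersion string
        ContainerRuntime  string
        ControlPlane      bool
        Worker            bool
    }
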
	I1025 10:13:21.320059  308083 out.go:179] * Starting "ha-480889" primary control-plane node in "ha-480889" cluster
	I1025 10:13:21.322881  308083 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:13:21.325849  308083 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:13:21.328624  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:13:21.328676  308083 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:13:21.328688  308083 cache.go:58] Caching tarball of preloaded images
	I1025 10:13:21.328730  308083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:13:21.328805  308083 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:13:21.328816  308083 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:13:21.328961  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:21.348972  308083 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:13:21.348996  308083 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:13:21.349014  308083 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:13:21.349046  308083 start.go:360] acquireMachinesLock for ha-480889: {Name:mk41781a5f7df8ed38323f26b29dd3de0536d841 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:13:21.349099  308083 start.go:364] duration metric: took 35.972µs to acquireMachinesLock for "ha-480889"
	I1025 10:13:21.349123  308083 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:13:21.349129  308083 fix.go:54] fixHost starting: 
	I1025 10:13:21.349386  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:13:21.366278  308083 fix.go:112] recreateIfNeeded on ha-480889: state=Stopped err=<nil>
	W1025 10:13:21.366311  308083 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:13:21.369548  308083 out.go:252] * Restarting existing docker container for "ha-480889" ...
	I1025 10:13:21.369634  308083 cli_runner.go:164] Run: docker start ha-480889
	I1025 10:13:21.622973  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:13:21.639685  308083 kic.go:430] container "ha-480889" state is running.
	I1025 10:13:21.640060  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:13:21.659744  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:21.659977  308083 machine.go:93] provisionDockerMachine start ...
	I1025 10:13:21.660037  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:21.679901  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:21.680217  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:21.680227  308083 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:13:21.681077  308083 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37726->127.0.0.1:33173: read: connection reset by peer
	I1025 10:13:24.829722  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889
	
	I1025 10:13:24.829748  308083 ubuntu.go:182] provisioning hostname "ha-480889"
	I1025 10:13:24.829819  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:24.848138  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:24.848455  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:24.848472  308083 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-480889 && echo "ha-480889" | sudo tee /etc/hostname
	I1025 10:13:25.012654  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889
	
	I1025 10:13:25.012743  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:25.031520  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:25.031847  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:25.031875  308083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-480889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-480889/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-480889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:13:25.182388  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
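
Note that the hostname patch above is deliberately idempotent: it rewrites the 127.0.1.1 entry only when the machine name is missing from /etc/hosts, so re-provisioning a restarted container produces no output (hence the empty command result). A parameterized sketch of the same shell logic, with HOSTNAME standing in for the machine name:

	HOSTNAME=ha-480889                                # placeholder: target machine name
	if ! grep -q "\s${HOSTNAME}$" /etc/hosts; then    # already present? do nothing
	  if grep -q '^127\.0\.1\.1\s' /etc/hosts; then   # rewrite an existing 127.0.1.1 entry
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${HOSTNAME}/g" /etc/hosts
	  else                                            # or append a fresh one
	    echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
	  fi
	fi
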
	I1025 10:13:25.182461  308083 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:13:25.182530  308083 ubuntu.go:190] setting up certificates
	I1025 10:13:25.182567  308083 provision.go:84] configureAuth start
	I1025 10:13:25.182666  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:13:25.200092  308083 provision.go:143] copyHostCerts
	I1025 10:13:25.200133  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:25.200165  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:13:25.200172  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:25.200245  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:13:25.200331  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:25.200352  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:13:25.200357  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:25.200382  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:13:25.200423  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:25.200438  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:13:25.200442  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:25.200464  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:13:25.200507  308083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.ha-480889 san=[127.0.0.1 192.168.49.2 ha-480889 localhost minikube]
	I1025 10:13:25.925035  308083 provision.go:177] copyRemoteCerts
	I1025 10:13:25.925106  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:13:25.925148  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:25.941975  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.046168  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 10:13:26.046249  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:13:26.065892  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 10:13:26.065964  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1025 10:13:26.086519  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 10:13:26.086582  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:13:26.105106  308083 provision.go:87] duration metric: took 922.501142ms to configureAuth
	I1025 10:13:26.105133  308083 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:13:26.105365  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:26.105486  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.123735  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:26.124045  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:26.124102  308083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:13:26.451879  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:13:26.451953  308083 machine.go:96] duration metric: took 4.791965867s to provisionDockerMachine
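
Writing CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube is how the provisioner marks the 10.96.0.0/12 service CIDR as an insecure registry before bouncing the runtime; the echoed file content in the command output above confirms the write landed. A quick post-restart check on the node (paths as in this run):

	cat /etc/sysconfig/crio.minikube   # should show the --insecure-registry flag
	systemctl is-active crio           # the restart at 10:13:26 should leave crio "active"
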
	I1025 10:13:26.451985  308083 start.go:293] postStartSetup for "ha-480889" (driver="docker")
	I1025 10:13:26.452035  308083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:13:26.452145  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:13:26.452222  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.474611  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.586070  308083 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:13:26.589442  308083 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:13:26.589480  308083 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:13:26.589492  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:13:26.589557  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:13:26.589654  308083 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:13:26.589667  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /etc/ssl/certs/2612562.pem
	I1025 10:13:26.589769  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:13:26.597470  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:26.615616  308083 start.go:296] duration metric: took 163.578765ms for postStartSetup
	I1025 10:13:26.615697  308083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:13:26.615759  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.632968  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.735211  308083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:13:26.740030  308083 fix.go:56] duration metric: took 5.390893179s for fixHost
	I1025 10:13:26.740056  308083 start.go:83] releasing machines lock for "ha-480889", held for 5.390944264s
	I1025 10:13:26.740127  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:13:26.756884  308083 ssh_runner.go:195] Run: cat /version.json
	I1025 10:13:26.756940  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.756964  308083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:13:26.757017  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.775539  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.778199  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.873785  308083 ssh_runner.go:195] Run: systemctl --version
	I1025 10:13:26.965654  308083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:13:27.005417  308083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:13:27.010728  308083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:13:27.010810  308083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:13:27.019133  308083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
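
The find/mv pass above quarantines any bridge or podman CNI configs by renaming them to *.mk_disabled, leaving only the CNI minikube manages (kindnet, selected at 10:13:28) active; here nothing matched, hence "nothing to disable". The same filter can be dry-run without renaming anything:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -print
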
	I1025 10:13:27.019158  308083 start.go:495] detecting cgroup driver to use...
	I1025 10:13:27.019210  308083 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:13:27.019280  308083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:13:27.034337  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:13:27.047938  308083 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:13:27.048000  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:13:27.063832  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:13:27.081381  308083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:13:27.198834  308083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:13:27.303413  308083 docker.go:234] disabling docker service ...
	I1025 10:13:27.303534  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:13:27.318254  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:13:27.331149  308083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:13:27.440477  308083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:13:27.554598  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:13:27.567225  308083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:13:27.581183  308083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:13:27.581264  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.590278  308083 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:13:27.590389  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.599250  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.607897  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.616848  308083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:13:27.625132  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.634834  308083 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.643393  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.653830  308083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:13:27.661579  308083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:13:27.669371  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:27.781686  308083 ssh_runner.go:195] Run: sudo systemctl restart crio
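
The run of sed edits between 10:13:27.58 and 10:13:27.65 rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10.1, forces the cgroupfs cgroup manager (matching the driver detected on the host at 10:13:27.019), sets conmon_cgroup to "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, all before this crio restart. The result can be spot-checked on the node with:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
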
	I1025 10:13:27.909770  308083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:13:27.909891  308083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:13:27.913604  308083 start.go:563] Will wait 60s for crictl version
	I1025 10:13:27.913677  308083 ssh_runner.go:195] Run: which crictl
	I1025 10:13:27.917354  308083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:13:27.943799  308083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:13:27.943944  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:27.972380  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:28.006726  308083 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:13:28.009638  308083 cli_runner.go:164] Run: docker network inspect ha-480889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:13:28.029757  308083 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 10:13:28.033806  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:28.045238  308083 kubeadm.go:883] updating cluster {Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:13:28.046168  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:13:28.046264  308083 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:28.081721  308083 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:13:28.081747  308083 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:13:28.081804  308083 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:28.109690  308083 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:13:28.109715  308083 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:13:28.109724  308083 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 10:13:28.109840  308083 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-480889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:13:28.109926  308083 ssh_runner.go:195] Run: crio config
	I1025 10:13:28.181906  308083 cni.go:84] Creating CNI manager for ""
	I1025 10:13:28.181927  308083 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1025 10:13:28.181947  308083 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:13:28.181970  308083 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-480889 NodeName:ha-480889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:13:28.182120  308083 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-480889"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
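The generated kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (scp'd at 10:13:28.236965) rather than applied directly; a later diff against the live /var/tmp/minikube/kubeadm.yaml (at 10:13:30.354596 below) decides whether the control plane needs reconfiguring. Since kubeadm binaries of the matching version are on the node, the staged file can also be sanity-checked offline, e.g.:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
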
	I1025 10:13:28.182142  308083 kube-vip.go:115] generating kube-vip config ...
	I1025 10:13:28.182194  308083 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1025 10:13:28.194754  308083 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:28.194852  308083 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
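
Because the ip_vs probe at 10:13:28.182194 failed, the generated kube-vip manifest relies on ARP-based leader election (vip_arp, vip_leaderelection) to float the 192.168.49.254 VIP and skips IPVS control-plane load-balancing. The probe is just a kernel-module check and can be repeated by hand:

	sudo sh -c "lsmod | grep ip_vs" || echo "ip_vs modules not loaded"
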
	I1025 10:13:28.194915  308083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:13:28.202716  308083 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:13:28.202791  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1025 10:13:28.211249  308083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1025 10:13:28.224427  308083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:13:28.236965  308083 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1025 10:13:28.249237  308083 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1025 10:13:28.261093  308083 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1025 10:13:28.265704  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
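
The hosts update above uses a rewrite-then-copy pattern: filter out any stale control-plane.minikube.internal line, append the current VIP mapping, then cp the temp file over /etc/hosts (sed -i would try to replace the inode, which fails on Docker's bind-mounted /etc/hosts, while cp overwrites in place). Generalized, with NAME and IP as placeholders:

	NAME=control-plane.minikube.internal; IP=192.168.49.254   # placeholders for this run's values
	{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
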
	I1025 10:13:28.275389  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:28.388284  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:28.404560  308083 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889 for IP: 192.168.49.2
	I1025 10:13:28.404624  308083 certs.go:195] generating shared ca certs ...
	I1025 10:13:28.404659  308083 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:28.404824  308083 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:13:28.404900  308083 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:13:28.404925  308083 certs.go:257] generating profile certs ...
	I1025 10:13:28.405027  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key
	I1025 10:13:28.405078  308083 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d
	I1025 10:13:28.405107  308083 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1025 10:13:29.281974  308083 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d ...
	I1025 10:13:29.282465  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d: {Name:mk2ee9cff9ddeca542ff438d607ca92d489e621a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:29.282692  308083 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d ...
	I1025 10:13:29.282818  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d: {Name:mk666a1056a90e3af7ff477b2ecc4f82c52a5311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:29.282987  308083 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt
	I1025 10:13:29.283272  308083 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key
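
The regenerated apiserver serving cert has to cover every endpoint a client might dial, which is why the SAN list at 10:13:28.405107 includes all three control-plane node IPs plus the 192.168.49.254 VIP alongside the service IP and localhost. With a recent OpenSSL, the SANs on the written cert can be confirmed directly:

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt
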
	I1025 10:13:29.283463  308083 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key
	I1025 10:13:29.283498  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 10:13:29.283530  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 10:13:29.283570  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 10:13:29.283605  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 10:13:29.283633  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 10:13:29.283680  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 10:13:29.283712  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 10:13:29.283743  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 10:13:29.283826  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:13:29.283879  308083 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:13:29.283905  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:13:29.283959  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:13:29.284007  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:13:29.284066  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:13:29.284138  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:29.284221  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.284263  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.284295  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem -> /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.284844  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:13:29.339963  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:13:29.378039  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:13:29.412109  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:13:29.439404  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:13:29.471848  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:13:29.495108  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:13:29.521223  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:13:29.555889  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:13:29.583865  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:13:29.607803  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:13:29.660341  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:13:29.687106  308083 ssh_runner.go:195] Run: openssl version
	I1025 10:13:29.696444  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:13:29.707221  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.717578  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.717659  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.790492  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:13:29.802381  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:13:29.810802  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.815111  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.815223  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.864875  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:13:29.872882  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:13:29.882139  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.887141  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.887254  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.933083  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
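
The test -L || ln -fs commands above reproduce what c_rehash/update-ca-certificates would do: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, and the hash names (b5213941.0 for minikubeCA, and so on) come from the preceding openssl x509 -hash runs. One link can be rebuilt manually the same way:

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"
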
	I1025 10:13:29.942393  308083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:13:29.946745  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:13:29.992960  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:13:30.044394  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:13:30.092620  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:13:30.151671  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:13:30.195276  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
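
Each openssl x509 -checkend 86400 above exits non-zero only if the certificate expires within 24 hours, so a clean pass across all six means the existing control-plane certs are reused rather than regenerated. The same sweep, looped over the cert names found on this node:

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
	    && echo "${c}: valid for >24h"
	done
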
	I1025 10:13:30.238904  308083 kubeadm.go:400] StartCluster: {Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:30.239101  308083 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:13:30.239204  308083 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:13:30.304407  308083 cri.go:89] found id: "07e7673199f69cfda9e91af2a66aad345a2ce7a92130398dd12fc4e17470e088"
	I1025 10:13:30.304479  308083 cri.go:89] found id: "9e3b516f6f15caae43bda25f85832b5ad9a201e6c7b833a1ba0ec9db87f687fd"
	I1025 10:13:30.304499  308083 cri.go:89] found id: "0b2d139004d5afcec6c5e7f18831bff8c069ba521b289758825ffdd6fd892697"
	I1025 10:13:30.304523  308083 cri.go:89] found id: "322c2cc726dbd336dc6d64af52ed0d7374e34249ef33e160f4bc633c2590c50d"
	I1025 10:13:30.304554  308083 cri.go:89] found id: "170a3a9364b5079051bd3c5c594733a45ac4ddd6193638cc413453308f5c0fac"
	I1025 10:13:30.304578  308083 cri.go:89] found id: ""
	I1025 10:13:30.304661  308083 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:13:30.328956  308083 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:13:30Z" level=error msg="open /run/runc: no such file or directory"
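
The runc list failure here is tolerated: minikube only uses it to find paused containers to unpause, and when /run/runc is absent (no containers have been created by runc under that root) it logs the W-level "unpause failed" warning and falls back to the existing-config check below to decide on a cluster restart. The probe itself is just:

	sudo runc list -f json || true   # exits 1 with "open /run/runc: ..." as seen above
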
	I1025 10:13:30.329101  308083 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:13:30.340608  308083 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:13:30.340681  308083 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:13:30.340762  308083 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:13:30.351736  308083 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:30.352209  308083 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-480889" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:13:30.352379  308083 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-259409/kubeconfig needs updating (will repair): [kubeconfig missing "ha-480889" cluster setting kubeconfig missing "ha-480889" context setting]
	I1025 10:13:30.352687  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
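
The verify step finds "ha-480889" missing from the shared kubeconfig, so the repair re-adds both the cluster and the context entries, pointing them at the client cert pair under profiles/ha-480889 (visible in the client config dump below). An equivalent manual repair (not the code path minikube takes) would be:

	kubectl config --kubeconfig=/home/jenkins/minikube-integration/21767-259409/kubeconfig \
	  set-cluster ha-480889 --server=https://192.168.49.2:8443 \
	  --certificate-authority=/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt
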
	I1025 10:13:30.353275  308083 kapi.go:59] client config for ha-480889: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:13:30.354022  308083 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1025 10:13:30.354112  308083 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 10:13:30.354147  308083 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 10:13:30.354173  308083 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 10:13:30.354194  308083 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 10:13:30.354220  308083 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 10:13:30.354596  308083 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:13:30.369232  308083 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1025 10:13:30.369295  308083 kubeadm.go:601] duration metric: took 28.594078ms to restartPrimaryControlPlane
	I1025 10:13:30.369334  308083 kubeadm.go:402] duration metric: took 130.438978ms to StartCluster
	I1025 10:13:30.369370  308083 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:30.369458  308083 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:13:30.370118  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:30.370359  308083 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:13:30.370404  308083 start.go:241] waiting for startup goroutines ...
	I1025 10:13:30.370435  308083 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:13:30.370975  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:30.376476  308083 out.go:179] * Enabled addons: 
	I1025 10:13:30.379493  308083 addons.go:514] duration metric: took 9.050073ms for enable addons: enabled=[]
	I1025 10:13:30.379556  308083 start.go:246] waiting for cluster config update ...
	I1025 10:13:30.379587  308083 start.go:255] writing updated cluster config ...
	I1025 10:13:30.382748  308083 out.go:203] 
	I1025 10:13:30.385876  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:30.386069  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:30.389383  308083 out.go:179] * Starting "ha-480889-m02" control-plane node in "ha-480889" cluster
	I1025 10:13:30.392170  308083 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:13:30.395076  308083 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:13:30.397919  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:13:30.397962  308083 cache.go:58] Caching tarball of preloaded images
	I1025 10:13:30.398098  308083 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:13:30.398132  308083 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:13:30.398282  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:30.398534  308083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:13:30.435730  308083 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:13:30.435756  308083 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:13:30.435773  308083 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:13:30.435796  308083 start.go:360] acquireMachinesLock for ha-480889-m02: {Name:mk5fa3d1d910363d3e584c1db68856801d0a168a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:13:30.435853  308083 start.go:364] duration metric: took 36.152µs to acquireMachinesLock for "ha-480889-m02"
	I1025 10:13:30.435879  308083 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:13:30.435886  308083 fix.go:54] fixHost starting: m02
	I1025 10:13:30.436144  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m02 --format={{.State.Status}}
	I1025 10:13:30.486709  308083 fix.go:112] recreateIfNeeded on ha-480889-m02: state=Stopped err=<nil>
	W1025 10:13:30.486741  308083 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:13:30.490037  308083 out.go:252] * Restarting existing docker container for "ha-480889-m02" ...
	I1025 10:13:30.490126  308083 cli_runner.go:164] Run: docker start ha-480889-m02
	I1025 10:13:30.892304  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m02 --format={{.State.Status}}
	I1025 10:13:30.928214  308083 kic.go:430] container "ha-480889-m02" state is running.
	I1025 10:13:30.928591  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m02
	I1025 10:13:30.962308  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:30.962572  308083 machine.go:93] provisionDockerMachine start ...
	I1025 10:13:30.962636  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:30.991814  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:30.992103  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:30.992112  308083 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:13:30.992798  308083 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53254->127.0.0.1:33178: read: connection reset by peer
	I1025 10:13:34.218384  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m02
	
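Editor's note: the first dial at 10:13:30 fails with a connection reset because sshd inside the just-started container is not listening yet; the provisioner simply retries until the hostname command at 10:13:34 succeeds. A hedged sketch of that retry loop with golang.org/x/crypto/ssh — port 33178 and user "docker" come from the log, the key path in main is a placeholder:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry redials until sshd inside the freshly started container
    // accepts the handshake; early attempts fail with "connection reset by
    // peer", exactly as in the log above.
    func dialWithRetry(addr, user, keyPath string, wait time.Duration) (*ssh.Client, error) {
        pemBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(pemBytes)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic container
            Timeout:         5 * time.Second,
        }
        var lastErr error
        for end := time.Now().Add(wait); time.Now().Before(end); time.Sleep(time.Second) {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            lastErr = err
        }
        return nil, fmt.Errorf("ssh never came up: %w", lastErr)
    }

    func main() {
        // Port and user from the log; the key path is a placeholder.
        c, err := dialWithRetry("127.0.0.1:33178", "docker", "/path/to/id_rsa", time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer c.Close()
        fmt.Println("ssh is up")
    }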
	I1025 10:13:34.218468  308083 ubuntu.go:182] provisioning hostname "ha-480889-m02"
	I1025 10:13:34.218568  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:34.242087  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:34.242402  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:34.242413  308083 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-480889-m02 && echo "ha-480889-m02" | sudo tee /etc/hostname
	I1025 10:13:34.553498  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m02
	
	I1025 10:13:34.553579  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:34.605778  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:34.606154  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:34.606179  308083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-480889-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-480889-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-480889-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:13:34.786380  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:13:34.786405  308083 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:13:34.786423  308083 ubuntu.go:190] setting up certificates
	I1025 10:13:34.786433  308083 provision.go:84] configureAuth start
	I1025 10:13:34.786494  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m02
	I1025 10:13:34.812196  308083 provision.go:143] copyHostCerts
	I1025 10:13:34.812238  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:34.812271  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:13:34.812277  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:34.812354  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:13:34.812427  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:34.812443  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:13:34.812448  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:34.812473  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:13:34.812508  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:34.812524  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:13:34.812528  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:34.812550  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:13:34.812594  308083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.ha-480889-m02 san=[127.0.0.1 192.168.49.3 ha-480889-m02 localhost minikube]
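Editor's note: the server cert is generated with the SAN list shown above (two IPs, three DNS names) and signed by the cluster CA. A sketch of building such a template with Go's crypto/x509 — self-signed here for brevity, whereas minikube signs with ca-key.pem:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SANs from the log line above: IPs go into IPAddresses,
        // everything else into DNSNames.
        sans := []string{"127.0.0.1", "192.168.49.3", "ha-480889-m02", "localhost", "minikube"}
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-480889-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, s := range sans {
            if ip := net.ParseIP(s); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, s)
            }
        }
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        // Self-signed for brevity; minikube signs with ca.pem/ca-key.pem.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }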
	I1025 10:13:35.433499  308083 provision.go:177] copyRemoteCerts
	I1025 10:13:35.437355  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:13:35.437432  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:35.478086  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:35.600269  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 10:13:35.600335  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:13:35.625245  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 10:13:35.625308  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 10:13:35.656095  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 10:13:35.656153  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:13:35.702462  308083 provision.go:87] duration metric: took 916.014065ms to configureAuth
	I1025 10:13:35.702539  308083 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:13:35.702849  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:35.703008  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:35.743726  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:35.744035  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:35.744050  308083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:13:36.131741  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:13:36.131816  308083 machine.go:96] duration metric: took 5.16923304s to provisionDockerMachine
	I1025 10:13:36.131850  308083 start.go:293] postStartSetup for "ha-480889-m02" (driver="docker")
	I1025 10:13:36.131900  308083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:13:36.132016  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:13:36.132089  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.151273  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.257973  308083 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:13:36.261457  308083 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:13:36.261487  308083 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:13:36.261499  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:13:36.261552  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:13:36.261635  308083 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:13:36.261648  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /etc/ssl/certs/2612562.pem
	I1025 10:13:36.261749  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:13:36.269152  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:36.286996  308083 start.go:296] duration metric: took 155.094351ms for postStartSetup
	I1025 10:13:36.287074  308083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:13:36.287145  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.305008  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.411951  308083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:13:36.420078  308083 fix.go:56] duration metric: took 5.984184266s for fixHost
	I1025 10:13:36.420100  308083 start.go:83] releasing machines lock for "ha-480889-m02", held for 5.984233964s
	I1025 10:13:36.420167  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m02
	I1025 10:13:36.443663  308083 out.go:179] * Found network options:
	I1025 10:13:36.446961  308083 out.go:179]   - NO_PROXY=192.168.49.2
	W1025 10:13:36.450808  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:13:36.450851  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	I1025 10:13:36.450943  308083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:13:36.450993  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.451266  308083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:13:36.451340  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.496453  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.500270  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.756746  308083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:13:36.868709  308083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:13:36.868786  308083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:13:36.881721  308083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:13:36.881748  308083 start.go:495] detecting cgroup driver to use...
	I1025 10:13:36.881782  308083 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:13:36.881843  308083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:13:36.907834  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:13:36.928826  308083 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:13:36.928911  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:13:36.951297  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:13:36.978500  308083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:13:37.180812  308083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:13:37.373723  308083 docker.go:234] disabling docker service ...
	I1025 10:13:37.373791  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:13:37.390746  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:13:37.405594  308083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:13:37.625534  308083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:13:37.834157  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:13:37.849602  308083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:13:37.879998  308083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:13:37.880065  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.894893  308083 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:13:37.894974  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.912955  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.922956  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.937706  308083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:13:37.948806  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.959464  308083 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.972181  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.983464  308083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:13:38.003743  308083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:13:38.037815  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:38.334072  308083 ssh_runner.go:195] Run: sudo systemctl restart crio
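Editor's note: the run above patches /etc/crio/crio.conf.d/02-crio.conf with a series of sed -i edits before restarting crio. The two central edits, redone as a small Go helper with regexp (same path and values as the log; the conmon_cgroup and sysctl edits are analogous):

    package main

    import (
        "os"
        "regexp"
    )

    // rewriteCrioConf mirrors the sed edits above: pin the pause image and
    // force the cgroupfs cgroup manager in 02-crio.conf.
    func rewriteCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        repl := []struct{ re, with string }{
            {`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`},
            {`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
        }
        for _, r := range repl {
            data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.with))
        }
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
            panic(err)
        }
    }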
	I1025 10:13:39.163742  308083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:13:39.163831  308083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:13:39.169004  308083 start.go:563] Will wait 60s for crictl version
	I1025 10:13:39.169072  308083 ssh_runner.go:195] Run: which crictl
	I1025 10:13:39.173735  308083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:13:39.204784  308083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:13:39.204890  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:39.239278  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:39.276711  308083 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:13:39.279715  308083 out.go:179]   - env NO_PROXY=192.168.49.2
	I1025 10:13:39.282816  308083 cli_runner.go:164] Run: docker network inspect ha-480889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:13:39.299629  308083 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 10:13:39.303856  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
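Editor's note: the one-liner above rewrites /etc/hosts idempotently — strip any stale host.minikube.internal line, append the fresh mapping, copy the temp file back. The same logic as a Go sketch:

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the grep -v / echo / cp one-liner above:
    // drop any existing line for the alias, then append the fresh mapping.
    func ensureHostsEntry(path, ip, alias string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+alias) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+alias)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // Values from the log; writing /etc/hosts needs root.
        if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }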
	I1025 10:13:39.314044  308083 mustload.go:65] Loading cluster: ha-480889
	I1025 10:13:39.314294  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:39.314598  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:13:39.343892  308083 host.go:66] Checking if "ha-480889" exists ...
	I1025 10:13:39.344182  308083 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889 for IP: 192.168.49.3
	I1025 10:13:39.344197  308083 certs.go:195] generating shared ca certs ...
	I1025 10:13:39.344211  308083 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:39.344335  308083 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:13:39.344393  308083 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:13:39.344406  308083 certs.go:257] generating profile certs ...
	I1025 10:13:39.344480  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key
	I1025 10:13:39.344547  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.1eaed255
	I1025 10:13:39.344593  308083 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key
	I1025 10:13:39.344606  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 10:13:39.344620  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 10:13:39.344636  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 10:13:39.344647  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 10:13:39.344663  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 10:13:39.344687  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 10:13:39.344718  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 10:13:39.344732  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 10:13:39.344792  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:13:39.344825  308083 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:13:39.344838  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:13:39.344861  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:13:39.344888  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:13:39.344914  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:13:39.344981  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:39.345016  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /usr/share/ca-certificates/2612562.pem
	I1025 10:13:39.345034  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:39.345045  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem -> /usr/share/ca-certificates/261256.pem
	I1025 10:13:39.345112  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:39.371934  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:39.470344  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1025 10:13:39.483516  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1025 10:13:39.501845  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1025 10:13:39.507200  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1025 10:13:39.527252  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1025 10:13:39.532933  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1025 10:13:39.549399  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1025 10:13:39.554586  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1025 10:13:39.570659  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1025 10:13:39.574962  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1025 10:13:39.584673  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1025 10:13:39.589172  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1025 10:13:39.598913  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:13:39.620680  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:13:39.644461  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:13:39.668589  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:13:39.692311  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:13:39.712807  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:13:39.739124  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:13:39.767676  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:13:39.790850  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:13:39.811105  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:13:39.833707  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:13:39.856043  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1025 10:13:39.869628  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1025 10:13:39.883404  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1025 10:13:39.897013  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1025 10:13:39.919485  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1025 10:13:39.945523  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1025 10:13:39.967210  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1025 10:13:39.994983  308083 ssh_runner.go:195] Run: openssl version
	I1025 10:13:40.002778  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:13:40.017144  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:13:40.022850  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:13:40.022982  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:13:40.073080  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:13:40.081683  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:13:40.090847  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:13:40.096142  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:13:40.096266  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:13:40.138985  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:13:40.147554  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:13:40.156382  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:40.161029  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:40.161195  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:40.202792  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:13:40.211314  308083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:13:40.215961  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:13:40.258002  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:13:40.301047  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:13:40.349624  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:13:40.395242  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:13:40.444494  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
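Editor's note: each "openssl x509 -checkend 86400" run above asks whether the certificate expires within the next 24 hours (86400 seconds); exit status 0 means it does not, so no regeneration is needed. An equivalent check in Go:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin is the Go equivalent of `openssl x509 -checkend`:
    // report whether the certificate's NotAfter falls inside the window.
    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        // Same path as one of the checks above.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }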
	I1025 10:13:40.496874  308083 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1025 10:13:40.496975  308083 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-480889-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:13:40.497007  308083 kube-vip.go:115] generating kube-vip config ...
	I1025 10:13:40.497062  308083 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1025 10:13:40.539654  308083 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
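Editor's note: "lsmod | grep ip_vs" exiting 1 means no ip_vs module is listed, so kube-vip gives up IPVS-based control-plane load-balancing. lsmod is a pretty-printer for /proc/modules, so the same probe can be done directly — with the caveat that a module built into the kernel never appears there:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // moduleLoaded scans /proc/modules (the same data lsmod prints)
    // for a loaded module of the given name.
    func moduleLoaded(name string) (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if strings.HasPrefix(sc.Text(), name+" ") {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := moduleLoaded("ip_vs")
        if err != nil {
            panic(err)
        }
        fmt.Println("ip_vs loaded:", ok) // false here, hence no IPVS load-balancing
    }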
	I1025 10:13:40.539717  308083 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
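Editor's note: the manifest's leader-election env vars set a 5s lease, 3s renew deadline and 1s retry period. For the election to be coherent these must be strictly ordered (lease > renew > retry), a trivial invariant worth stating explicitly; values below come from the manifest above:

    package main

    import "fmt"

    // validLeaseTiming checks the ordering that leader election needs:
    // the lease must outlive the renew deadline, which must outlive the
    // retry period.
    func validLeaseTiming(lease, renew, retry int) bool {
        return lease > renew && renew > retry && retry > 0
    }

    func main() {
        fmt.Println(validLeaseTiming(5, 3, 1)) // true
    }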
	I1025 10:13:40.539780  308083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:13:40.558469  308083 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:13:40.558603  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1025 10:13:40.566867  308083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 10:13:40.583436  308083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:13:40.596901  308083 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1025 10:13:40.612066  308083 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1025 10:13:40.616047  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:40.627164  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:40.770079  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:40.784212  308083 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:13:40.784687  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:40.790656  308083 out.go:179] * Verifying Kubernetes components...
	I1025 10:13:40.793379  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:40.919442  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:40.934315  308083 kapi.go:59] client config for ha-480889: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1025 10:13:40.934388  308083 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1025 10:13:40.936607  308083 node_ready.go:35] waiting up to 6m0s for node "ha-480889-m02" to be "Ready" ...
	I1025 10:14:03.978798  308083 node_ready.go:49] node "ha-480889-m02" is "Ready"
	I1025 10:14:03.978827  308083 node_ready.go:38] duration metric: took 23.042187504s for node "ha-480889-m02" to be "Ready" ...
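Editor's note: the 23s wait for ha-480889-m02 boils down to polling the node's NodeReady condition. A rough client-go equivalent, assuming a kubeconfig in the default location (minikube builds its own rest.Config, as the kapi.go dump below shows):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the NodeReady condition, roughly what
    // node_ready.go does for ha-480889-m02 above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("node %q not Ready: %w", name, ctx.Err())
            case <-time.After(2 * time.Second):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(cs, "ha-480889-m02", 6*time.Minute); err != nil {
            panic(err)
        }
    }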
	I1025 10:14:03.978841  308083 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:14:03.978901  308083 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:14:04.002008  308083 api_server.go:72] duration metric: took 23.217688145s to wait for apiserver process to appear ...
	I1025 10:14:04.002035  308083 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:14:04.002057  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:04.065805  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:14:04.065839  308083 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:14:04.502158  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:04.511711  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:14:04.511802  308083 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:14:05.002194  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:05.013361  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:14:05.013506  308083 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:14:05.503134  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:05.514732  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 10:14:05.518544  308083 api_server.go:141] control plane version: v1.34.1
	I1025 10:14:05.518622  308083 api_server.go:131] duration metric: took 1.516578961s to wait for apiserver health ...
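Editor's note: each healthz probe above is a plain HTTPS GET; the 500 responses enumerate poststarthooks still pending, and the loop retries every ~500ms until the body is just "ok". A sketch of that loop — InsecureSkipVerify stands in for loading minikube's ca.crt and is not what the real client does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver's /healthz endpoint until it
    // answers 200, treating 500s (pending poststarthooks) as "retry".
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                io.Copy(io.Discard, resp.Body) // drain; a 500 body lists pending hooks
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s never became healthy", url)
    }

    func main() {
        if err := waitHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
            panic(err)
        }
    }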
	I1025 10:14:05.518646  308083 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:14:05.535848  308083 system_pods.go:59] 26 kube-system pods found
	I1025 10:14:05.535941  308083 system_pods.go:61] "coredns-66bc5c9577-ctnsn" [4c76c01c-15ed-4930-ac1a-1e2bf7de3961] Running
	I1025 10:14:05.535963  308083 system_pods.go:61] "coredns-66bc5c9577-h4lrc" [ade89685-c5d2-4e4e-847d-7af6cb3fb862] Running
	I1025 10:14:05.535986  308083 system_pods.go:61] "etcd-ha-480889" [e343e174-731b-4eb7-97df-0220f254bfcf] Running
	I1025 10:14:05.536032  308083 system_pods.go:61] "etcd-ha-480889-m02" [52f56789-d8bf-4251-9316-a0b572f65125] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:14:05.536059  308083 system_pods.go:61] "etcd-ha-480889-m03" [7fb90646-4b60-4cc2-a527-c7e563bb182b] Running
	I1025 10:14:05.536100  308083 system_pods.go:61] "kindnet-227ts" [c2c62be9-5d6e-4a43-9eff-9a7e220282d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:14:05.536125  308083 system_pods.go:61] "kindnet-2fqxj" [da4ef885-af3d-4ee3-9151-cdca0253c911] Running
	I1025 10:14:05.536154  308083 system_pods.go:61] "kindnet-8fgmd" [13833b7e-6794-4f30-8bec-20375bd481f2] Running
	I1025 10:14:05.536192  308083 system_pods.go:61] "kindnet-92p8z" [c1f4d260-381c-42d8-a8a5-77ae60cf42c6] Running
	I1025 10:14:05.536214  308083 system_pods.go:61] "kube-apiserver-ha-480889" [3f293b6b-7247-48a0-aa80-508696bea727] Running
	I1025 10:14:05.536251  308083 system_pods.go:61] "kube-apiserver-ha-480889-m02" [faae5baa-e581-4254-b659-0687cfebfb67] Running
	I1025 10:14:05.536276  308083 system_pods.go:61] "kube-apiserver-ha-480889-m03" [f18f8a4d-22bd-48e4-9b23-e5383f2fce25] Running
	I1025 10:14:05.536299  308083 system_pods.go:61] "kube-controller-manager-ha-480889" [6c111362-d576-4cb0-b102-086f180ff7b7] Running
	I1025 10:14:05.536340  308083 system_pods.go:61] "kube-controller-manager-ha-480889-m02" [443192d3-d7a3-40c4-99bf-2a1eac354f88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:14:05.536367  308083 system_pods.go:61] "kube-controller-manager-ha-480889-m03" [c5d29ad2-f161-4c39-9de4-35916c43e02b] Running
	I1025 10:14:05.536392  308083 system_pods.go:61] "kube-proxy-29hlq" [2c0b691f-c26f-49bd-9b8b-39819ca8539d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:14:05.536425  308083 system_pods.go:61] "kube-proxy-4d5ks" [058d38d9-4dec-40ff-ac68-9651d27ba0c6] Running
	I1025 10:14:05.536449  308083 system_pods.go:61] "kube-proxy-6x5rb" [e73b3f75-02d7-46e3-940c-ffd727e4c87d] Running
	I1025 10:14:05.536471  308083 system_pods.go:61] "kube-proxy-9rtcs" [6fd17399-e636-4de6-aa9c-e0e3d3656c41] Running
	I1025 10:14:05.536506  308083 system_pods.go:61] "kube-scheduler-ha-480889" [9036810d-dce1-4542-ac53-b5d70020809c] Running
	I1025 10:14:05.536532  308083 system_pods.go:61] "kube-scheduler-ha-480889-m02" [f4c7c190-55e0-4bbf-9c22-fe9b3d8fc98d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:14:05.536556  308083 system_pods.go:61] "kube-scheduler-ha-480889-m03" [fdcb0331-d8b0-4fb0-9549-459e365b5863] Running
	I1025 10:14:05.536591  308083 system_pods.go:61] "kube-vip-ha-480889" [07959933-b7f0-46ad-9fa2-d9c661db7882] Running
	I1025 10:14:05.536614  308083 system_pods.go:61] "kube-vip-ha-480889-m02" [fea939ce-de9c-446b-b961-37a72c945913] Running
	I1025 10:14:05.536639  308083 system_pods.go:61] "kube-vip-ha-480889-m03" [f2a5dbed-19e6-4092-8340-c798578dfd40] Running
	I1025 10:14:05.536679  308083 system_pods.go:61] "storage-provisioner" [15113825-bb63-434f-bd5e-2ffd789452d6] Running
	I1025 10:14:05.536705  308083 system_pods.go:74] duration metric: took 18.038599ms to wait for pod list to return data ...
	I1025 10:14:05.536727  308083 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:14:05.551153  308083 default_sa.go:45] found service account: "default"
	I1025 10:14:05.551231  308083 default_sa.go:55] duration metric: took 14.469512ms for default service account to be created ...
	I1025 10:14:05.551256  308083 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:14:05.562144  308083 system_pods.go:86] 26 kube-system pods found
	I1025 10:14:05.562232  308083 system_pods.go:89] "coredns-66bc5c9577-ctnsn" [4c76c01c-15ed-4930-ac1a-1e2bf7de3961] Running
	I1025 10:14:05.562257  308083 system_pods.go:89] "coredns-66bc5c9577-h4lrc" [ade89685-c5d2-4e4e-847d-7af6cb3fb862] Running
	I1025 10:14:05.562298  308083 system_pods.go:89] "etcd-ha-480889" [e343e174-731b-4eb7-97df-0220f254bfcf] Running
	I1025 10:14:05.562329  308083 system_pods.go:89] "etcd-ha-480889-m02" [52f56789-d8bf-4251-9316-a0b572f65125] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:14:05.562357  308083 system_pods.go:89] "etcd-ha-480889-m03" [7fb90646-4b60-4cc2-a527-c7e563bb182b] Running
	I1025 10:14:05.562400  308083 system_pods.go:89] "kindnet-227ts" [c2c62be9-5d6e-4a43-9eff-9a7e220282d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:14:05.562424  308083 system_pods.go:89] "kindnet-2fqxj" [da4ef885-af3d-4ee3-9151-cdca0253c911] Running
	I1025 10:14:05.562452  308083 system_pods.go:89] "kindnet-8fgmd" [13833b7e-6794-4f30-8bec-20375bd481f2] Running
	I1025 10:14:05.562486  308083 system_pods.go:89] "kindnet-92p8z" [c1f4d260-381c-42d8-a8a5-77ae60cf42c6] Running
	I1025 10:14:05.562513  308083 system_pods.go:89] "kube-apiserver-ha-480889" [3f293b6b-7247-48a0-aa80-508696bea727] Running
	I1025 10:14:05.562563  308083 system_pods.go:89] "kube-apiserver-ha-480889-m02" [faae5baa-e581-4254-b659-0687cfebfb67] Running
	I1025 10:14:05.562590  308083 system_pods.go:89] "kube-apiserver-ha-480889-m03" [f18f8a4d-22bd-48e4-9b23-e5383f2fce25] Running
	I1025 10:14:05.562616  308083 system_pods.go:89] "kube-controller-manager-ha-480889" [6c111362-d576-4cb0-b102-086f180ff7b7] Running
	I1025 10:14:05.562658  308083 system_pods.go:89] "kube-controller-manager-ha-480889-m02" [443192d3-d7a3-40c4-99bf-2a1eac354f88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:14:05.562685  308083 system_pods.go:89] "kube-controller-manager-ha-480889-m03" [c5d29ad2-f161-4c39-9de4-35916c43e02b] Running
	I1025 10:14:05.562729  308083 system_pods.go:89] "kube-proxy-29hlq" [2c0b691f-c26f-49bd-9b8b-39819ca8539d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:14:05.562755  308083 system_pods.go:89] "kube-proxy-4d5ks" [058d38d9-4dec-40ff-ac68-9651d27ba0c6] Running
	I1025 10:14:05.562843  308083 system_pods.go:89] "kube-proxy-6x5rb" [e73b3f75-02d7-46e3-940c-ffd727e4c87d] Running
	I1025 10:14:05.562883  308083 system_pods.go:89] "kube-proxy-9rtcs" [6fd17399-e636-4de6-aa9c-e0e3d3656c41] Running
	I1025 10:14:05.562903  308083 system_pods.go:89] "kube-scheduler-ha-480889" [9036810d-dce1-4542-ac53-b5d70020809c] Running
	I1025 10:14:05.562928  308083 system_pods.go:89] "kube-scheduler-ha-480889-m02" [f4c7c190-55e0-4bbf-9c22-fe9b3d8fc98d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:14:05.562965  308083 system_pods.go:89] "kube-scheduler-ha-480889-m03" [fdcb0331-d8b0-4fb0-9549-459e365b5863] Running
	I1025 10:14:05.562991  308083 system_pods.go:89] "kube-vip-ha-480889" [07959933-b7f0-46ad-9fa2-d9c661db7882] Running
	I1025 10:14:05.563016  308083 system_pods.go:89] "kube-vip-ha-480889-m02" [fea939ce-de9c-446b-b961-37a72c945913] Running
	I1025 10:14:05.563070  308083 system_pods.go:89] "kube-vip-ha-480889-m03" [f2a5dbed-19e6-4092-8340-c798578dfd40] Running
	I1025 10:14:05.563096  308083 system_pods.go:89] "storage-provisioner" [15113825-bb63-434f-bd5e-2ffd789452d6] Running
	I1025 10:14:05.563122  308083 system_pods.go:126] duration metric: took 11.844458ms to wait for k8s-apps to be running ...
	I1025 10:14:05.563161  308083 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:14:05.563251  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:14:05.583878  308083 system_svc.go:56] duration metric: took 20.700093ms WaitForService to wait for kubelet
	I1025 10:14:05.583959  308083 kubeadm.go:586] duration metric: took 24.799662385s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:14:05.584013  308083 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:14:05.602014  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602101  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602129  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602149  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602183  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602208  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602232  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602268  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602294  308083 node_conditions.go:105] duration metric: took 18.245402ms to run NodePressure ...
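
	(Editor's note: the node_conditions.go:122-123 pairs above are read straight off each Node object's status.capacity. A minimal sketch of the same read with client-go — the kubeconfig path is a placeholder, not taken from this run:)

	// capacity_check.go - hedged sketch, not minikube's actual code: print
	// each node's cpu and ephemeral-storage capacity, the fields logged above.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; minikube resolves the profile's kubeconfig itself.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}
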
	I1025 10:14:05.602322  308083 start.go:241] waiting for startup goroutines ...
	I1025 10:14:05.602372  308083 start.go:255] writing updated cluster config ...
	I1025 10:14:05.606107  308083 out.go:203] 
	I1025 10:14:05.609375  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:14:05.609570  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:14:05.612923  308083 out.go:179] * Starting "ha-480889-m03" control-plane node in "ha-480889" cluster
	I1025 10:14:05.616650  308083 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:14:05.619578  308083 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:14:05.622647  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:14:05.622723  308083 cache.go:58] Caching tarball of preloaded images
	I1025 10:14:05.622730  308083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:14:05.622888  308083 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:14:05.622906  308083 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:14:05.623058  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:14:05.644689  308083 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:14:05.644714  308083 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:14:05.644728  308083 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:14:05.644760  308083 start.go:360] acquireMachinesLock for ha-480889-m03: {Name:mkdc7aead07cc61c4483ca641c0f901f32cc9e0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:14:05.644832  308083 start.go:364] duration metric: took 40.6µs to acquireMachinesLock for "ha-480889-m03"
	I1025 10:14:05.644859  308083 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:14:05.644869  308083 fix.go:54] fixHost starting: m03
	I1025 10:14:05.645136  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m03 --format={{.State.Status}}
	I1025 10:14:05.665455  308083 fix.go:112] recreateIfNeeded on ha-480889-m03: state=Stopped err=<nil>
	W1025 10:14:05.665482  308083 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:14:05.668964  308083 out.go:252] * Restarting existing docker container for "ha-480889-m03" ...
	I1025 10:14:05.669067  308083 cli_runner.go:164] Run: docker start ha-480889-m03
	I1025 10:14:06.010869  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m03 --format={{.State.Status}}
	I1025 10:14:06.033631  308083 kic.go:430] container "ha-480889-m03" state is running.
	I1025 10:14:06.034025  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m03
	I1025 10:14:06.062398  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:14:06.062842  308083 machine.go:93] provisionDockerMachine start ...
	I1025 10:14:06.062924  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:06.096711  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:06.097013  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:06.097022  308083 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:14:06.100286  308083 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44394->127.0.0.1:33183: read: connection reset by peer
	I1025 10:14:09.422447  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m03
	
	I1025 10:14:09.422528  308083 ubuntu.go:182] provisioning hostname "ha-480889-m03"
	I1025 10:14:09.422611  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:09.454682  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:09.454994  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:09.455005  308083 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-480889-m03 && echo "ha-480889-m03" | sudo tee /etc/hostname
	I1025 10:14:09.716055  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m03
	
	I1025 10:14:09.716202  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:09.758198  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:09.758502  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:09.758518  308083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-480889-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-480889-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-480889-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:14:09.952740  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
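
	(Editor's note: the hostname provisioning above is plain SSH exec against the container's published 22/tcp port, 33183 here, using the machine key shown later in the log. A minimal sketch of that round trip, assuming golang.org/x/crypto/ssh rather than libmachine's own client:)

	// ssh_hostname.go - hedged sketch of the provisioning round trip above,
	// not libmachine's actual implementation.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test rig only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33183", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// Same command the log shows at 10:14:09.
		out, err := sess.Output(`sudo hostname ha-480889-m03 && echo "ha-480889-m03" | sudo tee /etc/hostname`)
		if err != nil {
			panic(err)
		}
		fmt.Printf("hostname set: %s", out)
	}
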
	I1025 10:14:09.952771  308083 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:14:09.952843  308083 ubuntu.go:190] setting up certificates
	I1025 10:14:09.952854  308083 provision.go:84] configureAuth start
	I1025 10:14:09.952966  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m03
	I1025 10:14:10.002091  308083 provision.go:143] copyHostCerts
	I1025 10:14:10.002146  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:14:10.002194  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:14:10.002207  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:14:10.002336  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:14:10.002445  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:14:10.002473  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:14:10.002482  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:14:10.002512  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:14:10.002620  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:14:10.002645  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:14:10.002656  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:14:10.002686  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:14:10.002748  308083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.ha-480889-m03 san=[127.0.0.1 192.168.49.4 ha-480889-m03 localhost minikube]
	I1025 10:14:10.250973  308083 provision.go:177] copyRemoteCerts
	I1025 10:14:10.251332  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:14:10.251408  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:10.289237  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:10.436731  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 10:14:10.436797  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 10:14:10.544747  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 10:14:10.544817  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:14:10.630377  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 10:14:10.630464  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:14:10.673862  308083 provision.go:87] duration metric: took 720.988399ms to configureAuth
	I1025 10:14:10.673890  308083 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:14:10.674168  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:14:10.674521  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:10.707641  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:10.707938  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:10.707957  308083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:14:11.154845  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:14:11.154927  308083 machine.go:96] duration metric: took 5.092069874s to provisionDockerMachine
	I1025 10:14:11.154954  308083 start.go:293] postStartSetup for "ha-480889-m03" (driver="docker")
	I1025 10:14:11.154994  308083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:14:11.155090  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:14:11.155169  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.175592  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.283365  308083 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:14:11.286806  308083 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:14:11.286877  308083 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:14:11.286905  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:14:11.286994  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:14:11.287123  308083 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:14:11.287171  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /etc/ssl/certs/2612562.pem
	I1025 10:14:11.287295  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:14:11.295059  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:14:11.316093  308083 start.go:296] duration metric: took 161.095107ms for postStartSetup
	I1025 10:14:11.316217  308083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:14:11.316276  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.333862  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.435204  308083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:14:11.440180  308083 fix.go:56] duration metric: took 5.79530454s for fixHost
	I1025 10:14:11.440241  308083 start.go:83] releasing machines lock for "ha-480889-m03", held for 5.795361279s
	I1025 10:14:11.440311  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m03
	I1025 10:14:11.464304  308083 out.go:179] * Found network options:
	I1025 10:14:11.467314  308083 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1025 10:14:11.470389  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:14:11.470430  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:14:11.470457  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:14:11.470474  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	I1025 10:14:11.470546  308083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:14:11.470610  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.470888  308083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:14:11.470954  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.492648  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.500283  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.796571  308083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:14:11.919974  308083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:14:11.920047  308083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:14:11.930959  308083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:14:11.931034  308083 start.go:495] detecting cgroup driver to use...
	I1025 10:14:11.931084  308083 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:14:11.931150  308083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:14:11.976106  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:14:12.014574  308083 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:14:12.014688  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:14:12.063668  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:14:12.091979  308083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:14:12.314959  308083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:14:12.575887  308083 docker.go:234] disabling docker service ...
	I1025 10:14:12.575989  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:14:12.601545  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:14:12.619323  308083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:14:12.867377  308083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:14:13.108726  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:14:13.127994  308083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:14:13.145943  308083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:14:13.146033  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.156671  308083 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:14:13.156750  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.168655  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.184089  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.194894  308083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:14:13.204315  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.214077  308083 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.224397  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.234566  308083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:14:13.243678  308083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:14:13.253013  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:14:13.493138  308083 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:15:43.813681  308083 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320502184s)
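
	(Editor's note: pieced together from the sed edits at 10:14:13, the /etc/crio/crio.conf.d/02-crio.conf drop-in that this restart reloaded should end up roughly as follows — a reconstruction from the commands above, not content captured from the node:)

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
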
	I1025 10:15:43.813712  308083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:15:43.813771  308083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:15:43.818284  308083 start.go:563] Will wait 60s for crictl version
	I1025 10:15:43.818348  308083 ssh_runner.go:195] Run: which crictl
	I1025 10:15:43.822612  308083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:15:43.849591  308083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:15:43.849679  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:15:43.881155  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:15:43.916090  308083 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:15:43.919321  308083 out.go:179]   - env NO_PROXY=192.168.49.2
	I1025 10:15:43.922326  308083 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1025 10:15:43.925259  308083 cli_runner.go:164] Run: docker network inspect ha-480889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:15:43.954223  308083 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 10:15:43.958732  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:15:43.969465  308083 mustload.go:65] Loading cluster: ha-480889
	I1025 10:15:43.969714  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:15:43.969954  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:15:43.987361  308083 host.go:66] Checking if "ha-480889" exists ...
	I1025 10:15:43.987646  308083 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889 for IP: 192.168.49.4
	I1025 10:15:43.987660  308083 certs.go:195] generating shared ca certs ...
	I1025 10:15:43.987675  308083 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:15:43.987792  308083 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:15:43.987838  308083 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:15:43.987850  308083 certs.go:257] generating profile certs ...
	I1025 10:15:43.987924  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key
	I1025 10:15:43.987987  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.7d4a26e1
	I1025 10:15:43.988022  308083 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key
	I1025 10:15:43.988030  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 10:15:43.988044  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 10:15:43.988056  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 10:15:43.988066  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 10:15:43.988076  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 10:15:43.988088  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 10:15:43.988099  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 10:15:43.988111  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 10:15:43.988160  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:15:43.988188  308083 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:15:43.988197  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:15:43.988222  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:15:43.988244  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:15:43.988266  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:15:43.988306  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:15:43.988330  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /usr/share/ca-certificates/2612562.pem
	I1025 10:15:43.988342  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:43.988353  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem -> /usr/share/ca-certificates/261256.pem
	I1025 10:15:43.988408  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:15:44.012522  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:15:44.114325  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1025 10:15:44.118993  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1025 10:15:44.127630  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1025 10:15:44.131303  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1025 10:15:44.140046  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1025 10:15:44.144492  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1025 10:15:44.154086  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1025 10:15:44.158181  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1025 10:15:44.167518  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1025 10:15:44.171723  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1025 10:15:44.181427  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1025 10:15:44.185332  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1025 10:15:44.194266  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:15:44.214098  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:15:44.234054  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:15:44.256195  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:15:44.279031  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:15:44.299344  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:15:44.323793  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:15:44.345417  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:15:44.365719  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:15:44.388245  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:15:44.408144  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:15:44.428098  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1025 10:15:44.441938  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1025 10:15:44.457102  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1025 10:15:44.471357  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1025 10:15:44.485615  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1025 10:15:44.498465  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1025 10:15:44.511910  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1025 10:15:44.531258  308083 ssh_runner.go:195] Run: openssl version
	I1025 10:15:44.540606  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:15:44.550354  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:15:44.554246  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:15:44.554361  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:15:44.602272  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:15:44.611902  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:15:44.622835  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:44.629226  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:44.629299  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:44.670883  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:15:44.679524  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:15:44.689802  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:15:44.693893  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:15:44.694068  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:15:44.735651  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:15:44.743736  308083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:15:44.747896  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:15:44.790110  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:15:44.832406  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:15:44.874662  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:15:44.915849  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:15:44.959092  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
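
	(Editor's note: each "openssl x509 -checkend 86400" run above only asks whether the certificate outlives the next 24 hours. The equivalent check in Go's standard library; the file path is a placeholder standing in for the certs checked above:)

	// checkend.go - hedged sketch of openssl's -checkend 86400 in Go.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Placeholder: the log checks apiserver-kubelet-client.crt,
		// etcd/server.crt, front-proxy-client.crt, and others.
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// -checkend 86400: fail if the cert expires within the next 86400s.
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}
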
	I1025 10:15:45.002430  308083 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1025 10:15:45.002579  308083 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-480889-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:15:45.002609  308083 kube-vip.go:115] generating kube-vip config ...
	I1025 10:15:45.002683  308083 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1025 10:15:45.029854  308083 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:15:45.029925  308083 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1025 10:15:45.030057  308083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:15:45.063539  308083 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:15:45.063684  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1025 10:15:45.095087  308083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 10:15:45.131847  308083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:15:45.152140  308083 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1025 10:15:45.177067  308083 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1025 10:15:45.183642  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:15:45.224794  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:15:45.476283  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:15:45.492420  308083 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:15:45.492955  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:15:45.496345  308083 out.go:179] * Verifying Kubernetes components...
	I1025 10:15:45.499247  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:15:45.679197  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:15:45.698347  308083 kapi.go:59] client config for ha-480889: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1025 10:15:45.698425  308083 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1025 10:15:45.698682  308083 node_ready.go:35] waiting up to 6m0s for node "ha-480889-m03" to be "Ready" ...
	W1025 10:15:47.704097  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:15:50.202392  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	[... 130 similar node_ready.go:57 retries elided: node "ha-480889-m03" stayed "Ready":"Unknown", logged roughly every 2.5s from 10:15:52 through 10:20:46 ...]
	W1025 10:20:49.202068  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:51.202968  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:53.701922  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:56.202876  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:20:58.702653  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:00.702696  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:03.202988  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:05.702469  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:08.203875  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:10.702312  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:12.702439  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:15.202672  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:17.203173  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:19.702749  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:22.202730  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:24.211889  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:26.702317  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:28.703052  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:31.202771  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:33.702614  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:35.702649  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:37.702884  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:40.202737  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:42.203329  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:21:44.702141  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	I1025 10:21:45.699723  308083 node_ready.go:38] duration metric: took 6m0.00101372s for node "ha-480889-m03" to be "Ready" ...
	I1025 10:21:45.702936  308083 out.go:203] 
	W1025 10:21:45.705812  308083 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1025 10:21:45.705837  308083 out.go:285] * 
	W1025 10:21:45.708064  308083 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:21:45.711065  308083 out.go:203] 
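	The retry loop above is minikube polling the node's Ready condition on roughly a 2.5s cadence until its 6-minute budget ("wait 6m0s for node") expires. A minimal client-go sketch of the same polling pattern is below; it is an illustration only, not minikube's actual node_ready.go implementation, and it assumes a kubeconfig at the default path:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		// Poll every 2.5s with a 6m deadline, matching the budget that expired above.
		err = wait.PollUntilContextTimeout(context.Background(), 2500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "ha-480889-m03", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API hiccups as retryable
				}
				for _, c := range node.Status.Conditions {
					if c.Type == v1.NodeReady {
						fmt.Printf("node Ready=%s (%s)\n", c.Status, c.Reason)
						return c.Status == v1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			// With the kubelet down on m03, Ready stays "Unknown" and this is the
			// same "context deadline exceeded" seen in the GUEST_START error above.
			fmt.Println("node never became Ready:", err)
		}
	}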
	
	
	==> CRI-O <==
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.872198639Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=aa627551-b4d5-499e-bdd6-7970bf78bb8e name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.8732321Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3a445103-3339-479a-b20e-1c8b913f81b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.873332589Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.879036798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.879394684Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b711fedb8e2618cc0f4b880fad10f4bf8b29d19e8ac5c5fbc1ffc64bd2f05ae5/merged/etc/passwd: no such file or directory"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.879492153Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b711fedb8e2618cc0f4b880fad10f4bf8b29d19e8ac5c5fbc1ffc64bd2f05ae5/merged/etc/group: no such file or directory"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.879805058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.90911303Z" level=info msg="Created container 259b995f91b9c68705817e45cb74e856232ab4b1d45cae1a557d2406942ace53: kube-system/storage-provisioner/storage-provisioner" id=3a445103-3339-479a-b20e-1c8b913f81b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.910291953Z" level=info msg="Starting container: 259b995f91b9c68705817e45cb74e856232ab4b1d45cae1a557d2406942ace53" id=376239e4-87b0-44a9-9df0-3c3e5353824a name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.917558356Z" level=info msg="Started container" PID=1402 containerID=259b995f91b9c68705817e45cb74e856232ab4b1d45cae1a557d2406942ace53 description=kube-system/storage-provisioner/storage-provisioner id=376239e4-87b0-44a9-9df0-3c3e5353824a name=/runtime.v1.RuntimeService/StartContainer sandboxID=088d0d7b8bf0c2f621c0ae22566dca0cf1d81367602172bfbbd843248aea9931
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.555322496Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.559161147Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.559205784Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.559228766Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.563121449Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.563158898Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.563183087Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.573678291Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.573872779Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.574376406Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.578374066Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.578414764Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.578445763Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.582936063Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.582996642Z" level=info msg="Updated default CNI network name to kindnet"
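	The CNI monitoring events above show the standard atomic-update pattern: kindnet writes 10-kindnet.conflist.temp and renames it into place, and CRI-O's watch on /etc/cni/net.d re-reads the config and re-selects the default network on each CREATE/WRITE/RENAME. A hedged sketch of such a directory watch using fsnotify follows (an assumption for illustration; CRI-O's own watcher differs in detail):
	
	package main
	
	import (
		"log"
	
		"github.com/fsnotify/fsnotify"
	)
	
	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()
	
		// Watch the CNI config directory, as the crio log above does.
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
	
		for {
			select {
			case ev := <-w.Events:
				// CREATE, WRITE and RENAME all arrive here; a real reload would
				// re-parse *.conflist files and pick the default network again.
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			case err := <-w.Errors:
				log.Println("watch error:", err)
			}
		}
	}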
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	259b995f91b9c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   088d0d7b8bf0c       storage-provisioner                 kube-system
	3d17e8c3e629c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   3                   8cbe108c8dc1a       kube-controller-manager-ha-480889   kube-system
	8a6f8ac4178b1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   a4778b8bb50e2       coredns-66bc5c9577-h4lrc            kube-system
	2c07e2732f356       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   221f4b21ed8c2       kube-proxy-6x5rb                    kube-system
	8b6196b876372       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   8039291c91840       busybox-7b57f96db7-wkwwg            default
	15568eac2b869       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   badf118cbd9c1       coredns-66bc5c9577-ctnsn            kube-system
	9e45eacfcf479       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   088d0d7b8bf0c       storage-provisioner                 kube-system
	fbcc0424a1c5f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   c3c6117dfa2fc       kindnet-8fgmd                       kube-system
	3d23dbb42715f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Exited              kube-controller-manager   2                   8cbe108c8dc1a       kube-controller-manager-ha-480889   kube-system
	07e7673199f69       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   f30b7eb202966       etcd-ha-480889                      kube-system
	0b2d139004d5a       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   dfd777c7213ec       kube-vip-ha-480889                  kube-system
	322c2cc726dbd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   b85982ed8ef84       kube-scheduler-ha-480889            kube-system
	170a3a9364b50       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   f6ee90b1515bb       kube-apiserver-ha-480889            kube-system
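	A container listing like the status table above can be pulled straight from CRI-O's CRI socket. A hedged Go sketch using the upstream cri-api client (the default crio socket path is assumed; error handling kept minimal):
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		pb "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// CRI-O listens on a unix socket; this path is the crio default.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		client := pb.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(context.Background(), &pb.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			// State distinguishes the Running vs Exited rows above;
			// Metadata.Attempt is the restart count in the ATTEMPT column.
			fmt.Printf("%s\t%s\tattempt=%d\n", c.Metadata.Name, c.State, c.Metadata.Attempt)
		}
	}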
	
	
	==> coredns [15568eac2b869838ebb71f6d12525ec66bc41f9aa490cf1a68c490999f19b9d6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56266 - 39256 "HINFO IN 6126263590743240156.8598032974753550859. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030490651s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8a6f8ac4178b104f0091791bd890925441e209f21434df4df270395089143c26] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56215 - 36101 "HINFO IN 38725101095574367.261866642865519352. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.011735961s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
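	Both coredns replicas time out dialing 10.96.0.1:443, the ClusterIP of the default kubernetes Service, which is consistent with Service routing not yet being programmed during the restart window. A minimal probe of that path from inside a pod could look like:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// 10.96.0.1:443 is the in-cluster API VIP, the same endpoint the
		// kubernetes plugin fails to list namespaces/services from above.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("VIP unreachable:", err) // matches the i/o timeout above
			return
		}
		conn.Close()
		fmt.Println("VIP reachable: Service rules for the apiserver are in place")
	}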
	
	
	==> describe nodes <==
	Name:               ha-480889
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-480889
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=ha-480889
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_07_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:07:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-480889
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:21:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:21:12 +0000   Sat, 25 Oct 2025 10:07:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:21:12 +0000   Sat, 25 Oct 2025 10:07:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:21:12 +0000   Sat, 25 Oct 2025 10:07:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:21:12 +0000   Sat, 25 Oct 2025 10:14:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-480889
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                8216dfdd-af7a-457f-ad51-df588b2f2c14
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wkwwg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-ctnsn             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-h4lrc             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-480889                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-8fgmd                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-480889             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-480889    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-6x5rb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-480889             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-480889                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 7m39s                  kube-proxy       
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-480889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-480889 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-480889 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-480889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-480889 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-480889 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-480889 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	  Normal   NodeHasSufficientMemory  8m19s (x8 over 8m19s)  kubelet          Node ha-480889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m19s (x8 over 8m19s)  kubelet          Node ha-480889 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m19s (x8 over 8m19s)  kubelet          Node ha-480889 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m36s                  node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	  Normal   RegisteredNode           7m24s                  node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
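	As a cross-check of the Allocated resources block for ha-480889 above: the CPU requests sum to 2 x 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 950m, and 950m of the node's 2000m allocatable is 47.5%, reported as 47%; likewise the memory requests 70Mi + 70Mi + 100Mi + 50Mi = 290Mi.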
	
	
	Name:               ha-480889-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-480889-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=ha-480889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_25T10_08_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:08:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-480889-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:21:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:20:38 +0000   Sat, 25 Oct 2025 10:08:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:20:38 +0000   Sat, 25 Oct 2025 10:08:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:20:38 +0000   Sat, 25 Oct 2025 10:08:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:20:38 +0000   Sat, 25 Oct 2025 10:09:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-480889-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ff971242-1f4f-45cd-b767-f92823ae34e7
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-cmlf6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-480889-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-227ts                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-480889-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-480889-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-29hlq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-480889-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-480889-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m30s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   CIDRAssignmentFailed     13m                    cidrAllocator    Node ha-480889-m02 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           13m                    node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	  Normal   NodeHasSufficientPID     9m20s (x8 over 9m20s)  kubelet          Node ha-480889-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m20s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m20s (x8 over 9m20s)  kubelet          Node ha-480889-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m20s (x8 over 9m20s)  kubelet          Node ha-480889-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 8m14s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m14s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m14s (x8 over 8m14s)  kubelet          Node ha-480889-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m14s (x8 over 8m14s)  kubelet          Node ha-480889-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m14s (x8 over 8m14s)  kubelet          Node ha-480889-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m36s                  node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	  Normal   RegisteredNode           7m24s                  node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	
	
	Name:               ha-480889-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-480889-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=ha-480889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_25T10_09_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:09:48 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-480889-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:12:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 25 Oct 2025 10:12:42 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 25 Oct 2025 10:12:42 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 25 Oct 2025 10:12:42 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 25 Oct 2025 10:12:42 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-480889-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ff37a051-18dc-4a4f-b50d-96619333e2c3
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-gzkw5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-480889-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-92p8z                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-480889-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-480889-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-4d5ks                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-480889-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-480889-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                Age    From             Message
	  ----    ------                ----   ----             -------
	  Normal  Starting              11m    kube-proxy       
	  Normal  RegisteredNode        11m    node-controller  Node ha-480889-m03 event: Registered Node ha-480889-m03 in Controller
	  Normal  CIDRAssignmentFailed  11m    cidrAllocator    Node ha-480889-m03 status is now: CIDRAssignmentFailed
	  Normal  CIDRAssignmentFailed  11m    cidrAllocator    Node ha-480889-m03 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode        11m    node-controller  Node ha-480889-m03 event: Registered Node ha-480889-m03 in Controller
	  Normal  RegisteredNode        11m    node-controller  Node ha-480889-m03 event: Registered Node ha-480889-m03 in Controller
	  Normal  RegisteredNode        7m36s  node-controller  Node ha-480889-m03 event: Registered Node ha-480889-m03 in Controller
	  Normal  RegisteredNode        7m24s  node-controller  Node ha-480889-m03 event: Registered Node ha-480889-m03 in Controller
	  Normal  NodeNotReady          6m46s  node-controller  Node ha-480889-m03 status is now: NodeNotReady
	
	
	Name:               ha-480889-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-480889-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=ha-480889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_25T10_11_08_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:11:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-480889-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:12:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 25 Oct 2025 10:11:49 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 25 Oct 2025 10:11:49 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 25 Oct 2025 10:11:49 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 25 Oct 2025 10:11:49 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-480889-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                cf43d700-f979-45ff-9dc8-5f80581e56db
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2fqxj       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-9rtcs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-480889-m04 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-480889-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-480889-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   NodeReady                9m58s              kubelet          Node ha-480889-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m36s              node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   RegisteredNode           7m24s              node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   NodeNotReady             6m46s              node-controller  Node ha-480889-m04 status is now: NodeNotReady
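	Both ha-480889-m03 and ha-480889-m04 show Ready=Unknown plus the node.kubernetes.io/unreachable NoSchedule/NoExecute taints that the node lifecycle controller applies once a kubelet stops posting status. A hedged client-go sketch that surfaces the same view across the cluster:
	
	package main
	
	import (
		"context"
		"fmt"
	
		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type != v1.NodeReady || c.Status == v1.ConditionTrue {
					continue
				}
				// Unknown/False Ready, e.g. "Kubelet stopped posting node status."
				fmt.Printf("%s Ready=%s: %s\n", n.Name, c.Status, c.Message)
				for _, t := range n.Spec.Taints {
					fmt.Printf("  taint %s:%s\n", t.Key, t.Effect)
				}
			}
		}
	}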
	
	
	==> dmesg <==
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	[Oct25 09:35] overlayfs: idmapped layers are currently not supported
	[Oct25 09:36] overlayfs: idmapped layers are currently not supported
	[ +24.160248] overlayfs: idmapped layers are currently not supported
	[Oct25 09:37] overlayfs: idmapped layers are currently not supported
	[  +8.216028] overlayfs: idmapped layers are currently not supported
	[Oct25 09:38] overlayfs: idmapped layers are currently not supported
	[Oct25 09:39] overlayfs: idmapped layers are currently not supported
	[Oct25 09:41] overlayfs: idmapped layers are currently not supported
	[ +14.126672] overlayfs: idmapped layers are currently not supported
	[Oct25 09:42] overlayfs: idmapped layers are currently not supported
	[Oct25 09:43] overlayfs: idmapped layers are currently not supported
	[Oct25 09:45] kauditd_printk_skb: 8 callbacks suppressed
	[Oct25 09:47] overlayfs: idmapped layers are currently not supported
	[Oct25 09:53] overlayfs: idmapped layers are currently not supported
	[Oct25 09:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:07] overlayfs: idmapped layers are currently not supported
	[Oct25 10:08] overlayfs: idmapped layers are currently not supported
	[Oct25 10:09] overlayfs: idmapped layers are currently not supported
	[Oct25 10:11] overlayfs: idmapped layers are currently not supported
	[Oct25 10:12] overlayfs: idmapped layers are currently not supported
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[  +4.737500] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [07e7673199f69cfda9e91af2a66aad345a2ce7a92130398dd12fc4e17470e088] <==
	{"level":"warn","ts":"2025-10-25T10:21:19.907151Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"192220adada3ae40","rtt":"70.711228ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:22.581848Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:22.581897Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:24.907915Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"192220adada3ae40","rtt":"70.711228ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:24.907985Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"192220adada3ae40","rtt":"70.888112ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:26.583264Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:26.583311Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:29.909080Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"192220adada3ae40","rtt":"70.888112ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:29.909094Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"192220adada3ae40","rtt":"70.711228ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:30.585230Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:30.585294Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:34.587327Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:34.587378Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:34.910075Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"192220adada3ae40","rtt":"70.711228ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:34.910094Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"192220adada3ae40","rtt":"70.888112ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:38.588624Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:38.588686Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:39.911123Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"192220adada3ae40","rtt":"70.888112ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:39.911133Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"192220adada3ae40","rtt":"70.711228ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:42.590215Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:42.590268Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:44.911813Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"192220adada3ae40","rtt":"70.711228ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:44.911830Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"192220adada3ae40","rtt":"70.888112ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:46.592415Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:46.592493Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 10:21:47 up  2:04,  0 user,  load average: 1.20, 1.48, 1.63
	Linux ha-480889 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fbcc0424a1c5f8864ade5ed9949267a842ff3cf9126f862facc9e1aa5eacffff] <==
	I1025 10:21:07.551450       1 main.go:324] Node ha-480889-m04 has CIDR [10.244.4.0/24] 
	I1025 10:21:17.552222       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:21:17.552260       1 main.go:301] handling current node
	I1025 10:21:17.552276       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1025 10:21:17.552283       1 main.go:324] Node ha-480889-m02 has CIDR [10.244.1.0/24] 
	I1025 10:21:17.552423       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1025 10:21:17.552437       1 main.go:324] Node ha-480889-m03 has CIDR [10.244.3.0/24] 
	I1025 10:21:17.552488       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1025 10:21:17.552499       1 main.go:324] Node ha-480889-m04 has CIDR [10.244.4.0/24] 
	I1025 10:21:27.558068       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:21:27.558104       1 main.go:301] handling current node
	I1025 10:21:27.558122       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1025 10:21:27.558129       1 main.go:324] Node ha-480889-m02 has CIDR [10.244.1.0/24] 
	I1025 10:21:27.558590       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1025 10:21:27.558620       1 main.go:324] Node ha-480889-m03 has CIDR [10.244.3.0/24] 
	I1025 10:21:27.558936       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1025 10:21:27.559025       1 main.go:324] Node ha-480889-m04 has CIDR [10.244.4.0/24] 
	I1025 10:21:37.558162       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:21:37.558266       1 main.go:301] handling current node
	I1025 10:21:37.558288       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1025 10:21:37.558296       1 main.go:324] Node ha-480889-m02 has CIDR [10.244.1.0/24] 
	I1025 10:21:37.558464       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1025 10:21:37.558476       1 main.go:324] Node ha-480889-m03 has CIDR [10.244.3.0/24] 
	I1025 10:21:37.558535       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1025 10:21:37.558545       1 main.go:324] Node ha-480889-m04 has CIDR [10.244.4.0/24] 
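
kindnet's reconcile loop above walks every node roughly every ten seconds (note the 10:21:07 / :17 / :27 / :37 cadence) and, for each remote node, ensures a route for that node's pod CIDR points at the node IP. A hypothetical sketch of that per-node step, not kindnet's actual code, using github.com/vishvananda/netlink:

    // Illustrative only: install "podCIDR via nodeIP" for one remote node.
    package sketch

    import (
        "net"

        "github.com/vishvananda/netlink"
    )

    func ensureRoute(podCIDR string, nodeIP net.IP) error {
        _, dst, err := net.ParseCIDR(podCIDR) // e.g. "10.244.1.0/24" for ha-480889-m02
        if err != nil {
            return err
        }
        // RouteReplace is idempotent: it adds the route or updates it in place.
        return netlink.RouteReplace(&netlink.Route{Dst: dst, Gw: nodeIP})
    }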
	
	
	==> kube-apiserver [170a3a9364b5079051bd3c5c594733a45ac4ddd6193638cc413453308f5c0fac] <==
	I1025 10:14:03.967743       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:14:03.967829       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:14:03.973267       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:14:04.026613       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:14:04.032481       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:14:04.052238       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:14:04.052324       1 policy_source.go:240] refreshing policies
	I1025 10:14:04.058097       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:14:04.061386       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:14:04.072018       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:14:04.072031       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:14:04.084688       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	W1025 10:14:04.098372       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1025 10:14:04.099889       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:14:04.119656       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:14:04.126503       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:14:04.130630       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:14:04.130701       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:14:04.131677       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1025 10:14:04.134712       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	W1025 10:14:05.447579       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1025 10:14:06.750594       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:14:38.917463       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:14:48.841975       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:15:01.983934       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
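
The `Resetting endpoints for master service "kubernetes"` lines are the apiserver's lease reconciler keeping the default/kubernetes Endpoints object in sync with the set of live control-plane IPs; here it converges to [192.168.49.2 192.168.49.3] once both apiservers are back. A minimal client-go sketch to read that object; the kubeconfig path is an assumption, any context pointing at this cluster works:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ep, err := cs.CoreV1().Endpoints("default").Get(context.Background(), "kubernetes", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, ss := range ep.Subsets {
            for _, a := range ss.Addresses {
                fmt.Println("control-plane endpoint:", a.IP) // expect 192.168.49.2, 192.168.49.3
            }
        }
    }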
	
	
	==> kube-controller-manager [3d17e8c3e629ce1a8cc189e9334fe0f0ede8346a9b11bb7ab70d582f3df753dd] <==
	I1025 10:14:23.281949       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:14:23.286288       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:14:23.286403       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:14:23.299410       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:14:23.300876       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:14:23.306254       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:14:23.306381       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-480889-m04"
	I1025 10:14:23.311210       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:14:23.312754       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:14:23.313149       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:14:23.313534       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-480889-m02"
	I1025 10:14:23.313587       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-480889-m03"
	I1025 10:14:23.313614       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-480889-m04"
	I1025 10:14:23.313645       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-480889"
	I1025 10:14:23.313684       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:14:23.317005       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:14:23.344025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:14:23.367963       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:14:23.368079       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:14:23.368095       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:14:38.895946       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-q2vqt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-q2vqt\": the object has been modified; please apply your changes to the latest version and try again"
	I1025 10:14:38.896149       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f9cd3e42-b9dd-4a9e-9497-cb7c76655b63", APIVersion:"v1", ResourceVersion:"304", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-q2vqt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-q2vqt": the object has been modified; please apply your changes to the latest version and try again
	I1025 10:14:48.851349       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-q2vqt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-q2vqt\": the object has been modified; please apply your changes to the latest version and try again"
	I1025 10:14:48.851413       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f9cd3e42-b9dd-4a9e-9497-cb7c76655b63", APIVersion:"v1", ResourceVersion:"304", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-q2vqt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-q2vqt": the object has been modified; please apply your changes to the latest version and try again
	I1025 10:20:12.113935       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-gzkw5"
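
The `Operation cannot be fulfilled ... the object has been modified` entries above are ordinary optimistic-concurrency conflicts: two writers raced on the kube-dns EndpointSlice, and the loser must re-read and retry, which the controller does on its own. The standard client-go pattern for the same situation, sketched with an assumed clientset cs:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // updateSlice re-reads the object on every attempt so the update carries a fresh
    // resourceVersion; retry.RetryOnConflict retries only on 409 Conflict errors.
    func updateSlice(ctx context.Context, cs kubernetes.Interface) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            slice, err := cs.DiscoveryV1().EndpointSlices("kube-system").Get(ctx, "kube-dns-q2vqt", metav1.GetOptions{})
            if err != nil {
                return err
            }
            // ...mutate slice here...
            _, err = cs.DiscoveryV1().EndpointSlices("kube-system").Update(ctx, slice, metav1.UpdateOptions{})
            return err
        })
    }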
	
	
	==> kube-controller-manager [3d23dbb42715f1bb9050f4885b532a8877bf1805090fe8f7db8038e263d7391a] <==
	I1025 10:13:51.570432       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:13:52.993227       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1025 10:13:52.993297       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:13:52.996751       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1025 10:13:52.996842       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1025 10:13:52.996860       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1025 10:13:52.996872       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 10:14:03.024456       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
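
This single error is why the kubelet log below shows kube-controller-manager in CrashLoopBackOff: this instance could not confirm apiserver health within its timeout (the /healthz probe returned forbidden while RBAC was still warming up after the restart), so it exited and the replacement container 3d17e8c3 succeeded. The probe it timed out on is a plain GET of /healthz, which can be reproduced through a clientset's REST client; a sketch assuming a configured clientset cs:

    package sketch

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // checkHealthz issues the same raw GET /healthz the controller-manager waits on.
    func checkHealthz(ctx context.Context, cs kubernetes.Interface) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
        if err != nil {
            return err // e.g. forbidden: User "system:kube-controller-manager" cannot get path "/healthz"
        }
        fmt.Println(string(body)) // "ok" when healthy
        return nil
    }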
	
	
	==> kube-proxy [2c07e2732f356b8a475ac49d8754bc57a66b40d6244caf09ba433eb3a403de55] <==
	I1025 10:14:07.748150       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:14:08.033949       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:14:08.140133       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:14:08.140226       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 10:14:08.140327       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:14:08.195750       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:14:08.196249       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:14:08.211646       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:14:08.212020       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:14:08.212082       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:14:08.213291       1 config.go:200] "Starting service config controller"
	I1025 10:14:08.217712       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:14:08.217781       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:14:08.217809       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:14:08.217847       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:14:08.217874       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:14:08.218634       1 config.go:309] "Starting node config controller"
	I1025 10:14:08.218703       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:14:08.218734       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:14:08.317931       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:14:08.318088       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:14:08.318104       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [322c2cc726dbd336dc6d64af52ed0d7374e34249ef33e160f4bc633c2590c50d] <==
	E1025 10:13:49.391976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:13:49.571750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:13:50.070254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:13:50.309013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:13:50.582259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:13:54.319884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:13:55.549322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:13:55.831850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:13:56.712002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:13:56.744458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:13:56.861512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:13:57.774322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:13:58.224119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:13:58.380289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:13:58.672474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:13:58.770260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:13:58.898191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:13:59.065845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:13:59.239221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:13:59.371086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:13:59.586515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:13:59.607070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:14:00.498025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:14:01.364396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1025 10:14:19.077422       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
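
All of the `Failed to watch ... is forbidden` entries predate roughly 10:14:04, when the restarted apiserver finished syncing its authorizer caches (see the apiserver log above); by 10:14:19 the scheduler's informers are synced and it recovers. Whether a given identity can perform a verb can be asked directly with a SelfSubjectAccessReview; a sketch, assuming a clientset cs authenticated as the identity under test:

    package sketch

    import (
        "context"
        "fmt"

        authorizationv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // canListStatefulSets mirrors one of the failing watches above:
    // cannot list resource "statefulsets" in API group "apps".
    func canListStatefulSets(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        ssar := &authorizationv1.SelfSubjectAccessReview{
            Spec: authorizationv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authorizationv1.ResourceAttributes{
                    Verb:     "list",
                    Group:    "apps",
                    Resource: "statefulsets",
                },
            },
        }
        res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, ssar, metav1.CreateOptions{})
        if err != nil {
            return false, err
        }
        fmt.Println("allowed:", res.Status.Allowed)
        return res.Status.Allowed, nil
    }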
	
	
	==> kubelet <==
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.542088     798 apiserver.go:52] "Watching apiserver"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.543114     798 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.562505     798 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-480889" podUID="07959933-b7f0-46ad-9fa2-d9c661db7882"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.565354     798 scope.go:117] "RemoveContainer" containerID="3d23dbb42715f1bb9050f4885b532a8877bf1805090fe8f7db8038e263d7391a"
	Oct 25 10:14:06 ha-480889 kubelet[798]: E1025 10:14:06.567060     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-480889_kube-system(9a81b87b3b974d940626f18d45a6aab1)\"" pod="kube-system/kube-controller-manager-ha-480889" podUID="9a81b87b3b974d940626f18d45a6aab1"
	Oct 25 10:14:06 ha-480889 kubelet[798]: E1025 10:14:06.615909     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-480889\" already exists" pod="kube-system/etcd-ha-480889"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.623931     798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f73a13738b45c11bf39c58ec6843885" path="/var/lib/kubelet/pods/4f73a13738b45c11bf39c58ec6843885/volumes"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.638563     798 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.700998     798 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-480889"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.701172     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-480889"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705440     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/13833b7e-6794-4f30-8bec-20375bd481f2-cni-cfg\") pod \"kindnet-8fgmd\" (UID: \"13833b7e-6794-4f30-8bec-20375bd481f2\") " pod="kube-system/kindnet-8fgmd"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705602     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13833b7e-6794-4f30-8bec-20375bd481f2-xtables-lock\") pod \"kindnet-8fgmd\" (UID: \"13833b7e-6794-4f30-8bec-20375bd481f2\") " pod="kube-system/kindnet-8fgmd"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705700     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e73b3f75-02d7-46e3-940c-ffd727e4c87d-lib-modules\") pod \"kube-proxy-6x5rb\" (UID: \"e73b3f75-02d7-46e3-940c-ffd727e4c87d\") " pod="kube-system/kube-proxy-6x5rb"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705777     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13833b7e-6794-4f30-8bec-20375bd481f2-lib-modules\") pod \"kindnet-8fgmd\" (UID: \"13833b7e-6794-4f30-8bec-20375bd481f2\") " pod="kube-system/kindnet-8fgmd"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705902     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e73b3f75-02d7-46e3-940c-ffd727e4c87d-xtables-lock\") pod \"kube-proxy-6x5rb\" (UID: \"e73b3f75-02d7-46e3-940c-ffd727e4c87d\") " pod="kube-system/kube-proxy-6x5rb"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.706049     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/15113825-bb63-434f-bd5e-2ffd789452d6-tmp\") pod \"storage-provisioner\" (UID: \"15113825-bb63-434f-bd5e-2ffd789452d6\") " pod="kube-system/storage-provisioner"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.763587     798 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.863460     798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-480889" podStartSLOduration=0.863439797 podStartE2EDuration="863.439797ms" podCreationTimestamp="2025-10-25 10:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:14:06.837585194 +0000 UTC m=+38.428191520" watchObservedRunningTime="2025-10-25 10:14:06.863439797 +0000 UTC m=+38.454046098"
	Oct 25 10:14:07 ha-480889 kubelet[798]: W1025 10:14:07.082204     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/crio-221f4b21ed8c28b6fd1698347efb2e67bd612d196fc843d8d64f3be9c60b2221 WatchSource:0}: Error finding container 221f4b21ed8c28b6fd1698347efb2e67bd612d196fc843d8d64f3be9c60b2221: Status 404 returned error can't find the container with id 221f4b21ed8c28b6fd1698347efb2e67bd612d196fc843d8d64f3be9c60b2221
	Oct 25 10:14:08 ha-480889 kubelet[798]: I1025 10:14:08.400429     798 scope.go:117] "RemoveContainer" containerID="3d23dbb42715f1bb9050f4885b532a8877bf1805090fe8f7db8038e263d7391a"
	Oct 25 10:14:08 ha-480889 kubelet[798]: E1025 10:14:08.400599     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-480889_kube-system(9a81b87b3b974d940626f18d45a6aab1)\"" pod="kube-system/kube-controller-manager-ha-480889" podUID="9a81b87b3b974d940626f18d45a6aab1"
	Oct 25 10:14:20 ha-480889 kubelet[798]: I1025 10:14:20.620615     798 scope.go:117] "RemoveContainer" containerID="3d23dbb42715f1bb9050f4885b532a8877bf1805090fe8f7db8038e263d7391a"
	Oct 25 10:14:28 ha-480889 kubelet[798]: E1025 10:14:28.527965     798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb\": container with ID starting with 863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb not found: ID does not exist" containerID="863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb"
	Oct 25 10:14:28 ha-480889 kubelet[798]: I1025 10:14:28.528089     798 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb" err="rpc error: code = NotFound desc = could not find container \"863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb\": container with ID starting with 863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb not found: ID does not exist"
	Oct 25 10:14:37 ha-480889 kubelet[798]: I1025 10:14:37.870067     798 scope.go:117] "RemoveContainer" containerID="9e45eacfcf479b2839ca5aa015423a2b920806c92232de9220ff03c17f84e584"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-480889 -n ha-480889
helpers_test.go:269: (dbg) Run:  kubectl --context ha-480889 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-q5kt7
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-480889 describe pod busybox-7b57f96db7-q5kt7
helpers_test.go:290: (dbg) kubectl --context ha-480889 describe pod busybox-7b57f96db7-q5kt7:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-q5kt7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r5xf9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-r5xf9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  96s   default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  96s   default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
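The FailedScheduling message decomposes cleanly: of 4 nodes, 2 already run a busybox replica (ruled out by pod anti-affinity) and 2 carry the node.kubernetes.io/unreachable taint the pod does not tolerate, so nothing fits and preemption cannot help. The deployment manifest is not shown in this excerpt, but the rule implied by the event has roughly this shape in client-go terms (a hypothetical reconstruction, not the suite's actual spec):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // oneReplicaPerNode forbids two pods labeled app=busybox on the same node.
    var oneReplicaPerNode = &corev1.Affinity{
        PodAntiAffinity: &corev1.PodAntiAffinity{
            RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                LabelSelector: &metav1.LabelSelector{
                    MatchLabels: map[string]string{"app": "busybox"},
                },
                TopologyKey: "kubernetes.io/hostname", // at most one match per hostname, i.e. per node
            }},
        },
    }
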
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (535.36s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (8.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 node delete m03 --alsologtostderr -v 5: (5.282080797s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5: exit status 7 (649.45231ms)

                                                
                                                
-- stdout --
	ha-480889
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-480889-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-480889-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:21:53.979203  314168 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:21:53.979399  314168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:53.979432  314168 out.go:374] Setting ErrFile to fd 2...
	I1025 10:21:53.979459  314168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:21:53.979801  314168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:21:53.980033  314168 out.go:368] Setting JSON to false
	I1025 10:21:53.980102  314168 mustload.go:65] Loading cluster: ha-480889
	I1025 10:21:53.980178  314168 notify.go:220] Checking for updates...
	I1025 10:21:53.980566  314168 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:21:53.980600  314168 status.go:174] checking status of ha-480889 ...
	I1025 10:21:53.981133  314168 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:21:54.003165  314168 status.go:371] ha-480889 host status = "Running" (err=<nil>)
	I1025 10:21:54.003193  314168 host.go:66] Checking if "ha-480889" exists ...
	I1025 10:21:54.003554  314168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:21:54.043340  314168 host.go:66] Checking if "ha-480889" exists ...
	I1025 10:21:54.043645  314168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:54.043694  314168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:21:54.067984  314168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:21:54.176238  314168 ssh_runner.go:195] Run: systemctl --version
	I1025 10:21:54.183110  314168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:54.196090  314168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:21:54.279763  314168 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:21:54.267685971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:21:54.280326  314168 kubeconfig.go:125] found "ha-480889" server: "https://192.168.49.254:8443"
	I1025 10:21:54.280373  314168 api_server.go:166] Checking apiserver status ...
	I1025 10:21:54.280424  314168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:21:54.292736  314168 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/852/cgroup
	I1025 10:21:54.301366  314168 api_server.go:182] apiserver freezer: "9:freezer:/docker/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/crio/crio-170a3a9364b5079051bd3c5c594733a45ac4ddd6193638cc413453308f5c0fac"
	I1025 10:21:54.301443  314168 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/crio/crio-170a3a9364b5079051bd3c5c594733a45ac4ddd6193638cc413453308f5c0fac/freezer.state
	I1025 10:21:54.309761  314168 api_server.go:204] freezer state: "THAWED"
	I1025 10:21:54.309786  314168 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 10:21:54.323065  314168 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 10:21:54.323095  314168 status.go:463] ha-480889 apiserver status = Running (err=<nil>)
	I1025 10:21:54.323106  314168 status.go:176] ha-480889 status: &{Name:ha-480889 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:21:54.323124  314168 status.go:174] checking status of ha-480889-m02 ...
	I1025 10:21:54.323445  314168 cli_runner.go:164] Run: docker container inspect ha-480889-m02 --format={{.State.Status}}
	I1025 10:21:54.340974  314168 status.go:371] ha-480889-m02 host status = "Running" (err=<nil>)
	I1025 10:21:54.341001  314168 host.go:66] Checking if "ha-480889-m02" exists ...
	I1025 10:21:54.341306  314168 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m02
	I1025 10:21:54.362170  314168 host.go:66] Checking if "ha-480889-m02" exists ...
	I1025 10:21:54.362536  314168 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:21:54.362586  314168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:21:54.380291  314168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:21:54.487549  314168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:21:54.500701  314168 kubeconfig.go:125] found "ha-480889" server: "https://192.168.49.254:8443"
	I1025 10:21:54.500734  314168 api_server.go:166] Checking apiserver status ...
	I1025 10:21:54.500798  314168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:21:54.513051  314168 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	I1025 10:21:54.521883  314168 api_server.go:182] apiserver freezer: "9:freezer:/docker/4460fe3c2fdd1c53d2e5a10936768ef179560c136c570edacd7d896b06538c9e/crio/crio-18de4b87d447ab79a015448a3c8d3ffe858fcc6b955cec103a0469a3216220d3"
	I1025 10:21:54.521966  314168 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4460fe3c2fdd1c53d2e5a10936768ef179560c136c570edacd7d896b06538c9e/crio/crio-18de4b87d447ab79a015448a3c8d3ffe858fcc6b955cec103a0469a3216220d3/freezer.state
	I1025 10:21:54.530641  314168 api_server.go:204] freezer state: "THAWED"
	I1025 10:21:54.530721  314168 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 10:21:54.539742  314168 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 10:21:54.539779  314168 status.go:463] ha-480889-m02 apiserver status = Running (err=<nil>)
	I1025 10:21:54.539789  314168 status.go:176] ha-480889-m02 status: &{Name:ha-480889-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:21:54.539807  314168 status.go:174] checking status of ha-480889-m04 ...
	I1025 10:21:54.540123  314168 cli_runner.go:164] Run: docker container inspect ha-480889-m04 --format={{.State.Status}}
	I1025 10:21:54.558095  314168 status.go:371] ha-480889-m04 host status = "Stopped" (err=<nil>)
	I1025 10:21:54.558119  314168 status.go:384] host is not running, skipping remaining checks
	I1025 10:21:54.558126  314168 status.go:176] ha-480889-m04 status: &{Name:ha-480889-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
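The trace above spells out how `minikube status` decides an apiserver is Running: pgrep for the kube-apiserver pid, read that pid's freezer cgroup, confirm freezer.state is THAWED (i.e. the container is not paused), then GET /healthz through the VIP at 192.168.49.254:8443. A sketch of the freezer part, assuming the pid (852) and the cgroup v1 paths seen in the trace:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Pid and the cgroupfs v1 freezer hierarchy are taken from the trace above.
        data, err := os.ReadFile("/proc/852/cgroup")
        if err != nil {
            panic(err)
        }
        for _, line := range strings.Split(string(data), "\n") {
            parts := strings.SplitN(line, ":", 3) // "9:freezer:/docker/808d.../crio/crio-170a..."
            if len(parts) == 3 && parts[1] == "freezer" {
                state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
                if err != nil {
                    panic(err)
                }
                fmt.Println(strings.TrimSpace(string(state))) // "THAWED" unless the container is paused
            }
        }
    }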
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-480889
helpers_test.go:243: (dbg) docker inspect ha-480889:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb",
	        "Created": "2025-10-25T10:07:16.735876836Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308208,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:13:21.399696936Z",
	            "FinishedAt": "2025-10-25T10:13:20.79843666Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/hosts",
	        "LogPath": "/var/lib/docker/containers/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb-json.log",
	        "Name": "/ha-480889",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-480889:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-480889",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb",
	                "LowerDir": "/var/lib/docker/overlay2/d159db5e3fba2acaf2be751adfd990d9559a06fe4315850b3c072a95af080135-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d159db5e3fba2acaf2be751adfd990d9559a06fe4315850b3c072a95af080135/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d159db5e3fba2acaf2be751adfd990d9559a06fe4315850b3c072a95af080135/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d159db5e3fba2acaf2be751adfd990d9559a06fe4315850b3c072a95af080135/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-480889",
	                "Source": "/var/lib/docker/volumes/ha-480889/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-480889",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-480889",
	                "name.minikube.sigs.k8s.io": "ha-480889",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "791d4899d5afa7873aa61454e9b98c6bf4cae328e5fac1d61bfb6966ee8cf636",
	            "SandboxKey": "/var/run/docker/netns/791d4899d5af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-480889": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:5c:03:eb:9b:24",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2218a4d410c8591103e2cd6973cfcc03970e864955c570ceafd8b830a42f8a91",
	                    "EndpointID": "f005f7f20c8dfee253108089d9a6288d3bd36c3e1a48e0821c1ab3d225d34362",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-480889",
	                        "808d21fd84e3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
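
The inspect dump above ends with the container's port map: every published container port (22, 2376, 5000, 8443, 32443) is bound on 127.0.0.1 to an ephemeral host port, and it is the 22/tcp entry (here 33173) that the provisioning steps later dial for SSH. A minimal Go sketch of recovering that endpoint from the same JSON (container name taken from this report; the struct models only the fields read here):

    // inspect_port.go: print the host endpoint mapped to the container's SSH port.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    type inspectEntry struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string
    			HostPort string
    		}
    	}
    }

    func main() {
    	// `docker container inspect` emits a JSON array with one entry per container.
    	out, err := exec.Command("docker", "container", "inspect", "ha-480889").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var entries []inspectEntry
    	if err := json.Unmarshal(out, &entries); err != nil {
    		log.Fatal(err)
    	}
    	// Ports is keyed by "<containerPort>/<proto>"; take the first binding for 22/tcp.
    	bindings := entries[0].NetworkSettings.Ports["22/tcp"]
    	if len(bindings) == 0 {
    		log.Fatal("no host binding for 22/tcp")
    	}
    	fmt.Printf("ssh endpoint: %s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
    }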
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-480889 -n ha-480889
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 logs -n 25: (1.343972937s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-480889 ssh -n ha-480889-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m02 sudo cat /home/docker/cp-test_ha-480889-m03_ha-480889-m02.txt                                         │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m03:/home/docker/cp-test.txt ha-480889-m04:/home/docker/cp-test_ha-480889-m03_ha-480889-m04.txt               │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test_ha-480889-m03_ha-480889-m04.txt                                         │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp testdata/cp-test.txt ha-480889-m04:/home/docker/cp-test.txt                                                             │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3016407791/001/cp-test_ha-480889-m04.txt │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt ha-480889:/home/docker/cp-test_ha-480889-m04_ha-480889.txt                       │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889 sudo cat /home/docker/cp-test_ha-480889-m04_ha-480889.txt                                                 │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt ha-480889-m02:/home/docker/cp-test_ha-480889-m04_ha-480889-m02.txt               │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m02 sudo cat /home/docker/cp-test_ha-480889-m04_ha-480889-m02.txt                                         │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt ha-480889-m03:/home/docker/cp-test_ha-480889-m04_ha-480889-m03.txt               │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m03 sudo cat /home/docker/cp-test_ha-480889-m04_ha-480889-m03.txt                                         │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ node    │ ha-480889 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ node    │ ha-480889 node start m02 --alsologtostderr -v 5                                                                                      │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ node    │ ha-480889 node list --alsologtostderr -v 5                                                                                           │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ stop    │ ha-480889 stop --alsologtostderr -v 5                                                                                                │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ ha-480889 start --wait true --alsologtostderr -v 5                                                                                   │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ node    │ ha-480889 node list --alsologtostderr -v 5                                                                                           │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ node    │ ha-480889 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:13:21
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:13:21.133168  308083 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:13:21.133290  308083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:21.133303  308083 out.go:374] Setting ErrFile to fd 2...
	I1025 10:13:21.133309  308083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:21.133562  308083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:13:21.133919  308083 out.go:368] Setting JSON to false
	I1025 10:13:21.134805  308083 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6953,"bootTime":1761380249,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:13:21.134877  308083 start.go:141] virtualization:  
	I1025 10:13:21.140316  308083 out.go:179] * [ha-480889] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:13:21.143327  308083 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:13:21.143404  308083 notify.go:220] Checking for updates...
	I1025 10:13:21.149301  308083 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:13:21.152089  308083 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:13:21.154925  308083 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:13:21.157773  308083 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:13:21.160618  308083 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:13:21.164113  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:21.164223  308083 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:13:21.197583  308083 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:13:21.197765  308083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:13:21.253016  308083 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-25 10:13:21.243524818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:13:21.253128  308083 docker.go:318] overlay module found
	I1025 10:13:21.256213  308083 out.go:179] * Using the docker driver based on existing profile
	I1025 10:13:21.259079  308083 start.go:305] selected driver: docker
	I1025 10:13:21.259120  308083 start.go:925] validating driver "docker" against &{Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:21.259253  308083 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:13:21.259348  308083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:13:21.316248  308083 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-25 10:13:21.30638419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:13:21.316658  308083 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:13:21.316688  308083 cni.go:84] Creating CNI manager for ""
	I1025 10:13:21.316750  308083 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1025 10:13:21.316803  308083 start.go:349] cluster config:
	{Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:21.320059  308083 out.go:179] * Starting "ha-480889" primary control-plane node in "ha-480889" cluster
	I1025 10:13:21.322881  308083 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:13:21.325849  308083 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:13:21.328624  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:13:21.328676  308083 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:13:21.328688  308083 cache.go:58] Caching tarball of preloaded images
	I1025 10:13:21.328730  308083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:13:21.328805  308083 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:13:21.328816  308083 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:13:21.328961  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:21.348972  308083 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:13:21.348996  308083 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:13:21.349014  308083 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:13:21.349046  308083 start.go:360] acquireMachinesLock for ha-480889: {Name:mk41781a5f7df8ed38323f26b29dd3de0536d841 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:13:21.349099  308083 start.go:364] duration metric: took 35.972µs to acquireMachinesLock for "ha-480889"
	I1025 10:13:21.349123  308083 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:13:21.349129  308083 fix.go:54] fixHost starting: 
	I1025 10:13:21.349386  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:13:21.366278  308083 fix.go:112] recreateIfNeeded on ha-480889: state=Stopped err=<nil>
	W1025 10:13:21.366311  308083 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:13:21.369548  308083 out.go:252] * Restarting existing docker container for "ha-480889" ...
	I1025 10:13:21.369634  308083 cli_runner.go:164] Run: docker start ha-480889
	I1025 10:13:21.622973  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:13:21.639685  308083 kic.go:430] container "ha-480889" state is running.
	I1025 10:13:21.640060  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:13:21.659744  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:21.659977  308083 machine.go:93] provisionDockerMachine start ...
	I1025 10:13:21.660037  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:21.679901  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:21.680217  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:21.680227  308083 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:13:21.681077  308083 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37726->127.0.0.1:33173: read: connection reset by peer
	I1025 10:13:24.829722  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889
	
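	The handshake failure at 10:13:21 followed by the clean result at 10:13:24 is the provisioner redialing until sshd inside the freshly restarted container answers. A TCP-level sketch of that retry loop in Go (host port 33173 taken from the log; the 30s deadline and 500ms backoff are illustrative):

	    // ssh_wait.go: redial the forwarded SSH port until the container answers.
	    package main

	    import (
	    	"fmt"
	    	"log"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	deadline := time.Now().Add(30 * time.Second)
	    	for {
	    		conn, err := net.DialTimeout("tcp", "127.0.0.1:33173", 2*time.Second)
	    		if err == nil {
	    			conn.Close()
	    			fmt.Println("sshd reachable")
	    			return
	    		}
	    		if time.Now().After(deadline) {
	    			log.Fatalf("gave up waiting for sshd: %v", err)
	    		}
	    		// Connection resets are expected while /sbin/init is still bringing sshd up.
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    }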
	I1025 10:13:24.829748  308083 ubuntu.go:182] provisioning hostname "ha-480889"
	I1025 10:13:24.829819  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:24.848138  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:24.848455  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:24.848472  308083 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-480889 && echo "ha-480889" | sudo tee /etc/hostname
	I1025 10:13:25.012654  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889
	
	I1025 10:13:25.012743  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:25.031520  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:25.031847  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:25.031875  308083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-480889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-480889/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-480889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:13:25.182388  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:13:25.182461  308083 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:13:25.182530  308083 ubuntu.go:190] setting up certificates
	I1025 10:13:25.182567  308083 provision.go:84] configureAuth start
	I1025 10:13:25.182666  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:13:25.200092  308083 provision.go:143] copyHostCerts
	I1025 10:13:25.200133  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:25.200165  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:13:25.200172  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:25.200245  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:13:25.200331  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:25.200352  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:13:25.200357  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:25.200382  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:13:25.200423  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:25.200438  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:13:25.200442  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:25.200464  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:13:25.200507  308083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.ha-480889 san=[127.0.0.1 192.168.49.2 ha-480889 localhost minikube]
	I1025 10:13:25.925035  308083 provision.go:177] copyRemoteCerts
	I1025 10:13:25.925106  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:13:25.925148  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:25.941975  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.046168  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 10:13:26.046249  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:13:26.065892  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 10:13:26.065964  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1025 10:13:26.086519  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 10:13:26.086582  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:13:26.105106  308083 provision.go:87] duration metric: took 922.501142ms to configureAuth
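	The configureAuth step that just completed regenerates the machine server certificate for the SAN set printed at 10:13:25 (san=[127.0.0.1 192.168.49.2 ha-480889 localhost minikube]). A self-signed Go sketch of minting a certificate carrying those SANs; minikube actually signs with the ca-key.pem shown above, so everything here besides the names and the 26280h lifetime from the profile is illustrative:

	    // server_cert.go: emit a PEM certificate with the SANs from the log.
	    package main

	    import (
	    	"crypto/ecdsa"
	    	"crypto/elliptic"
	    	"crypto/rand"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"log"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	tmpl := x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-480889"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
	    		DNSNames:     []string{"ha-480889", "localhost", "minikube"},
	    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    	}
	    	// Self-signed for brevity: the template doubles as its own parent.
	    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }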
	I1025 10:13:26.105133  308083 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:13:26.105365  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:26.105486  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.123735  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:26.124045  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:26.124102  308083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:13:26.451879  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:13:26.451953  308083 machine.go:96] duration metric: took 4.791965867s to provisionDockerMachine
	I1025 10:13:26.451985  308083 start.go:293] postStartSetup for "ha-480889" (driver="docker")
	I1025 10:13:26.452035  308083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:13:26.452145  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:13:26.452222  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.474611  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.586070  308083 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:13:26.589442  308083 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:13:26.589480  308083 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:13:26.589492  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:13:26.589557  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:13:26.589654  308083 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:13:26.589667  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /etc/ssl/certs/2612562.pem
	I1025 10:13:26.589769  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:13:26.597470  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:26.615616  308083 start.go:296] duration metric: took 163.578765ms for postStartSetup
	I1025 10:13:26.615697  308083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:13:26.615759  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.632968  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.735211  308083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:13:26.740030  308083 fix.go:56] duration metric: took 5.390893179s for fixHost
	I1025 10:13:26.740056  308083 start.go:83] releasing machines lock for "ha-480889", held for 5.390944264s
	I1025 10:13:26.740127  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:13:26.756884  308083 ssh_runner.go:195] Run: cat /version.json
	I1025 10:13:26.756940  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.756964  308083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:13:26.757017  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.775539  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.778199  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.873785  308083 ssh_runner.go:195] Run: systemctl --version
	I1025 10:13:26.965654  308083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:13:27.005417  308083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:13:27.010728  308083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:13:27.010810  308083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:13:27.019133  308083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:13:27.019158  308083 start.go:495] detecting cgroup driver to use...
	I1025 10:13:27.019210  308083 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:13:27.019280  308083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:13:27.034337  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:13:27.047938  308083 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:13:27.048000  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:13:27.063832  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:13:27.081381  308083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:13:27.198834  308083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:13:27.303413  308083 docker.go:234] disabling docker service ...
	I1025 10:13:27.303534  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:13:27.318254  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:13:27.331149  308083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:13:27.440477  308083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:13:27.554598  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:13:27.567225  308083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:13:27.581183  308083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:13:27.581264  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.590278  308083 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:13:27.590389  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.599250  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.607897  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.616848  308083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:13:27.625132  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.634834  308083 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.643393  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.653830  308083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:13:27.661579  308083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:13:27.669371  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:27.781686  308083 ssh_runner.go:195] Run: sudo systemctl restart crio
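	The sed sequence at 10:13:27 pins the pause image, switches CRI-O to the cgroupfs manager, moves conmon into the pod cgroup, and opens unprivileged ports before this restart. Assuming the stock kicbase drop-in layout (the section headers below are an assumption; only the keys appear in the log), /etc/crio/crio.conf.d/02-crio.conf ends up roughly as:

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]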
	I1025 10:13:27.909770  308083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:13:27.909891  308083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:13:27.913604  308083 start.go:563] Will wait 60s for crictl version
	I1025 10:13:27.913677  308083 ssh_runner.go:195] Run: which crictl
	I1025 10:13:27.917354  308083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:13:27.943799  308083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:13:27.943944  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:27.972380  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:28.006726  308083 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:13:28.009638  308083 cli_runner.go:164] Run: docker network inspect ha-480889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:13:28.029757  308083 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 10:13:28.033806  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:28.045238  308083 kubeadm.go:883] updating cluster {Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:13:28.046168  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:13:28.046264  308083 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:28.081721  308083 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:13:28.081747  308083 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:13:28.081804  308083 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:28.109690  308083 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:13:28.109715  308083 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:13:28.109724  308083 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 10:13:28.109840  308083 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-480889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:13:28.109926  308083 ssh_runner.go:195] Run: crio config
	I1025 10:13:28.181906  308083 cni.go:84] Creating CNI manager for ""
	I1025 10:13:28.181927  308083 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1025 10:13:28.181947  308083 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:13:28.181970  308083 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-480889 NodeName:ha-480889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:13:28.182120  308083 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-480889"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
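	The three-document manifest above (InitConfiguration, ClusterConfiguration, KubeletConfiguration plus the KubeProxyConfiguration) can be sanity-checked offline before it is applied: on kubeadm v1.26 and newer (v1.34.1 here), running "sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml" parses all documents and reports unknown fields or API-version mismatches. The path assumes the kubeadm.yaml.new staged by the scp step below has been moved into place.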
	I1025 10:13:28.182142  308083 kube-vip.go:115] generating kube-vip config ...
	I1025 10:13:28.182194  308083 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1025 10:13:28.194754  308083 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:28.194852  308083 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
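	The manifest above runs kube-vip in ARP failover mode only (vip_arp=true, no load-balancer settings) because the lsmod probe at 10:13:28 found no ip_vs kernel modules. A Go sketch of that gate, using the same shell probe the log records; the boolean merely decides whether IPVS-based control-plane load-balancing would be configured:

	    // ipvs_probe.go: check for ip_vs kernel modules before enabling kube-vip LB.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	// `lsmod | grep ip_vs` exits non-zero when no ip_vs module is loaded,
	    	// which is exactly the condition the log reports before falling back.
	    	err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Run()
	    	fmt.Println("enable control-plane load-balancing:", err == nil)
	    }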
	I1025 10:13:28.194915  308083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:13:28.202716  308083 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:13:28.202791  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1025 10:13:28.211249  308083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1025 10:13:28.224427  308083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:13:28.236965  308083 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1025 10:13:28.249237  308083 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1025 10:13:28.261093  308083 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1025 10:13:28.265704  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:28.275389  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:28.388284  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:28.404560  308083 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889 for IP: 192.168.49.2
	I1025 10:13:28.404624  308083 certs.go:195] generating shared ca certs ...
	I1025 10:13:28.404659  308083 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:28.404824  308083 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:13:28.404900  308083 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:13:28.404925  308083 certs.go:257] generating profile certs ...
	I1025 10:13:28.405027  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key
	I1025 10:13:28.405078  308083 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d
	I1025 10:13:28.405107  308083 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1025 10:13:29.281974  308083 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d ...
	I1025 10:13:29.282465  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d: {Name:mk2ee9cff9ddeca542ff438d607ca92d489e621a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:29.282692  308083 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d ...
	I1025 10:13:29.282818  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d: {Name:mk666a1056a90e3af7ff477b2ecc4f82c52a5311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:29.282987  308083 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt
	I1025 10:13:29.283272  308083 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key
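
Worth noting in the cert generation above: the SAN list spans the in-cluster service IP (10.96.0.1), localhost, all three control-plane node IPs, and the kube-vip address 192.168.49.254, so the same apiserver certificate validates no matter which endpoint a client dials. A self-contained Go sketch of minting a certificate with that SAN set, self-signed for brevity (minikube actually signs with its cluster CA):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Illustrative self-signed certificate carrying the SAN IPs from the log.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"),
                net.ParseIP("192.168.49.4"), net.ParseIP("192.168.49.254"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
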
	I1025 10:13:29.283463  308083 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key
	I1025 10:13:29.283498  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 10:13:29.283530  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 10:13:29.283570  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 10:13:29.283605  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 10:13:29.283633  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 10:13:29.283680  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 10:13:29.283712  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 10:13:29.283743  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 10:13:29.283826  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:13:29.283879  308083 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:13:29.283905  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:13:29.283959  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:13:29.284007  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:13:29.284066  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:13:29.284138  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:29.284221  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.284263  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.284295  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem -> /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.284844  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:13:29.339963  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:13:29.378039  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:13:29.412109  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:13:29.439404  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:13:29.471848  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:13:29.495108  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:13:29.521223  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:13:29.555889  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:13:29.583865  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:13:29.607803  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:13:29.660341  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:13:29.687106  308083 ssh_runner.go:195] Run: openssl version
	I1025 10:13:29.696444  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:13:29.707221  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.717578  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.717659  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.790492  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:13:29.802381  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:13:29.810802  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.815111  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.815223  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.864875  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:13:29.872882  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:13:29.882139  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.887141  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.887254  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.933083  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
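
Each openssl x509 -hash -noout call above prints the subject-name hash that OpenSSL uses to look CA certificates up in /etc/ssl/certs, and the following test -L … || ln -fs … creates the matching <hash>.0 symlink only when absent (b5213941.0 corresponds to the minikubeCA subject). A Go approximation of the idempotent-symlink half; unlike ln -fs, this sketch leaves an existing regular file alone:

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    // ensureSymlink approximates `test -L link || ln -fs target link`:
    // create the symlink only when nothing exists at link yet.
    func ensureSymlink(target, link string) error {
        if _, err := os.Lstat(link); err == nil {
            return nil // something already there; the shell version would force-replace non-symlinks
        } else if !errors.Is(err, os.ErrNotExist) {
            return err
        }
        return os.Symlink(target, link)
    }

    func main() {
        // b5213941.0 is the OpenSSL subject hash printed for minikubeCA above.
        err := ensureSymlink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/b5213941.0")
        fmt.Println(err)
    }
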
	I1025 10:13:29.942393  308083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:13:29.946745  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:13:29.992960  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:13:30.044394  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:13:30.092620  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:13:30.151671  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:13:30.195276  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
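
The run of openssl x509 -checkend 86400 calls above is the certificate-expiry sweep: each command exits non-zero if its certificate lapses within the next 24 hours, signalling that regeneration is needed. The equivalent check in Go (the path is just an example taken from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
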
	I1025 10:13:30.238904  308083 kubeadm.go:400] StartCluster: {Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:30.239101  308083 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:13:30.239204  308083 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:13:30.304407  308083 cri.go:89] found id: "07e7673199f69cfda9e91af2a66aad345a2ce7a92130398dd12fc4e17470e088"
	I1025 10:13:30.304479  308083 cri.go:89] found id: "9e3b516f6f15caae43bda25f85832b5ad9a201e6c7b833a1ba0ec9db87f687fd"
	I1025 10:13:30.304499  308083 cri.go:89] found id: "0b2d139004d5afcec6c5e7f18831bff8c069ba521b289758825ffdd6fd892697"
	I1025 10:13:30.304523  308083 cri.go:89] found id: "322c2cc726dbd336dc6d64af52ed0d7374e34249ef33e160f4bc633c2590c50d"
	I1025 10:13:30.304554  308083 cri.go:89] found id: "170a3a9364b5079051bd3c5c594733a45ac4ddd6193638cc413453308f5c0fac"
	I1025 10:13:30.304578  308083 cri.go:89] found id: ""
	I1025 10:13:30.304661  308083 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:13:30.328956  308083 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:13:30Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:13:30.329101  308083 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:13:30.340608  308083 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:13:30.340681  308083 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:13:30.340762  308083 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:13:30.351736  308083 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:30.352209  308083 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-480889" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:13:30.352379  308083 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-259409/kubeconfig needs updating (will repair): [kubeconfig missing "ha-480889" cluster setting kubeconfig missing "ha-480889" context setting]
	I1025 10:13:30.352687  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:30.353275  308083 kapi.go:59] client config for ha-480889: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:13:30.354022  308083 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1025 10:13:30.354112  308083 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 10:13:30.354147  308083 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 10:13:30.354173  308083 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 10:13:30.354194  308083 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 10:13:30.354220  308083 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 10:13:30.354596  308083 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:13:30.369232  308083 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1025 10:13:30.369295  308083 kubeadm.go:601] duration metric: took 28.594078ms to restartPrimaryControlPlane
	I1025 10:13:30.369334  308083 kubeadm.go:402] duration metric: took 130.438978ms to StartCluster
	I1025 10:13:30.369370  308083 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:30.369458  308083 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:13:30.370118  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:30.370359  308083 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:13:30.370404  308083 start.go:241] waiting for startup goroutines ...
	I1025 10:13:30.370435  308083 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:13:30.370975  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:30.376476  308083 out.go:179] * Enabled addons: 
	I1025 10:13:30.379493  308083 addons.go:514] duration metric: took 9.050073ms for enable addons: enabled=[]
	I1025 10:13:30.379556  308083 start.go:246] waiting for cluster config update ...
	I1025 10:13:30.379587  308083 start.go:255] writing updated cluster config ...
	I1025 10:13:30.382748  308083 out.go:203] 
	I1025 10:13:30.385876  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:30.386069  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:30.389383  308083 out.go:179] * Starting "ha-480889-m02" control-plane node in "ha-480889" cluster
	I1025 10:13:30.392170  308083 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:13:30.395076  308083 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:13:30.397919  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:13:30.397962  308083 cache.go:58] Caching tarball of preloaded images
	I1025 10:13:30.398098  308083 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:13:30.398132  308083 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:13:30.398282  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:30.398534  308083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:13:30.435730  308083 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:13:30.435756  308083 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:13:30.435773  308083 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:13:30.435796  308083 start.go:360] acquireMachinesLock for ha-480889-m02: {Name:mk5fa3d1d910363d3e584c1db68856801d0a168a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:13:30.435853  308083 start.go:364] duration metric: took 36.152µs to acquireMachinesLock for "ha-480889-m02"
	I1025 10:13:30.435879  308083 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:13:30.435886  308083 fix.go:54] fixHost starting: m02
	I1025 10:13:30.436144  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m02 --format={{.State.Status}}
	I1025 10:13:30.486709  308083 fix.go:112] recreateIfNeeded on ha-480889-m02: state=Stopped err=<nil>
	W1025 10:13:30.486741  308083 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:13:30.490037  308083 out.go:252] * Restarting existing docker container for "ha-480889-m02" ...
	I1025 10:13:30.490126  308083 cli_runner.go:164] Run: docker start ha-480889-m02
	I1025 10:13:30.892304  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m02 --format={{.State.Status}}
	I1025 10:13:30.928214  308083 kic.go:430] container "ha-480889-m02" state is running.
	I1025 10:13:30.928591  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m02
	I1025 10:13:30.962308  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:30.962572  308083 machine.go:93] provisionDockerMachine start ...
	I1025 10:13:30.962636  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:30.991814  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:30.992103  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:30.992112  308083 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:13:30.992798  308083 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53254->127.0.0.1:33178: read: connection reset by peer
	I1025 10:13:34.218384  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m02
	
	I1025 10:13:34.218468  308083 ubuntu.go:182] provisioning hostname "ha-480889-m02"
	I1025 10:13:34.218568  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:34.242087  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:34.242402  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:34.242413  308083 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-480889-m02 && echo "ha-480889-m02" | sudo tee /etc/hostname
	I1025 10:13:34.553498  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m02
	
	I1025 10:13:34.553579  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:34.605778  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:34.606154  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:34.606179  308083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-480889-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-480889-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-480889-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:13:34.786380  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:13:34.786405  308083 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:13:34.786423  308083 ubuntu.go:190] setting up certificates
	I1025 10:13:34.786433  308083 provision.go:84] configureAuth start
	I1025 10:13:34.786494  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m02
	I1025 10:13:34.812196  308083 provision.go:143] copyHostCerts
	I1025 10:13:34.812238  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:34.812271  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:13:34.812277  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:34.812354  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:13:34.812427  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:34.812443  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:13:34.812448  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:34.812473  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:13:34.812508  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:34.812524  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:13:34.812528  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:34.812550  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:13:34.812594  308083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.ha-480889-m02 san=[127.0.0.1 192.168.49.3 ha-480889-m02 localhost minikube]
	I1025 10:13:35.433499  308083 provision.go:177] copyRemoteCerts
	I1025 10:13:35.437355  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:13:35.437432  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:35.478086  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:35.600269  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 10:13:35.600335  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:13:35.625245  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 10:13:35.625308  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 10:13:35.656095  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 10:13:35.656153  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:13:35.702462  308083 provision.go:87] duration metric: took 916.014065ms to configureAuth
	I1025 10:13:35.702539  308083 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:13:35.702849  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:35.703008  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:35.743726  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:35.744035  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:35.744050  308083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:13:36.131741  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:13:36.131816  308083 machine.go:96] duration metric: took 5.16923304s to provisionDockerMachine
	I1025 10:13:36.131850  308083 start.go:293] postStartSetup for "ha-480889-m02" (driver="docker")
	I1025 10:13:36.131900  308083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:13:36.132016  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:13:36.132089  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.151273  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.257973  308083 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:13:36.261457  308083 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:13:36.261487  308083 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:13:36.261499  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:13:36.261552  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:13:36.261635  308083 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:13:36.261648  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /etc/ssl/certs/2612562.pem
	I1025 10:13:36.261749  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:13:36.269152  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:36.286996  308083 start.go:296] duration metric: took 155.094351ms for postStartSetup
	I1025 10:13:36.287074  308083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:13:36.287145  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.305008  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.411951  308083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:13:36.420078  308083 fix.go:56] duration metric: took 5.984184266s for fixHost
	I1025 10:13:36.420100  308083 start.go:83] releasing machines lock for "ha-480889-m02", held for 5.984233964s
	I1025 10:13:36.420167  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m02
	I1025 10:13:36.443663  308083 out.go:179] * Found network options:
	I1025 10:13:36.446961  308083 out.go:179]   - NO_PROXY=192.168.49.2
	W1025 10:13:36.450808  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:13:36.450851  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	I1025 10:13:36.450943  308083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:13:36.450993  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.451266  308083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:13:36.451340  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.496453  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.500270  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.756746  308083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:13:36.868709  308083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:13:36.868786  308083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:13:36.881721  308083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:13:36.881748  308083 start.go:495] detecting cgroup driver to use...
	I1025 10:13:36.881782  308083 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:13:36.881843  308083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:13:36.907834  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:13:36.928826  308083 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:13:36.928911  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:13:36.951297  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:13:36.978500  308083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:13:37.180812  308083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:13:37.373723  308083 docker.go:234] disabling docker service ...
	I1025 10:13:37.373791  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:13:37.390746  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:13:37.405594  308083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:13:37.625534  308083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:13:37.834157  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:13:37.849602  308083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:13:37.879998  308083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:13:37.880065  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.894893  308083 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:13:37.894974  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.912955  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.922956  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.937706  308083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:13:37.948806  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.959464  308083 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.972181  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.983464  308083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:13:38.003743  308083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:13:38.037815  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:38.334072  308083 ssh_runner.go:195] Run: sudo systemctl restart crio
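
The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager to "cgroupfs" to match the detected host driver, park conmon in the pod cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before restarting crio. A small Go sketch of the key-rewrite pattern those sed invocations implement (illustrative only, not minikube code):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setTOMLKey replaces any `key = ...` line, as the
    // `sed -i 's|^.*key = .*$|key = "..."|'` calls above do.
    func setTOMLKey(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
    }

    func main() {
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
        fmt.Print(conf)
    }
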
	I1025 10:13:39.163742  308083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:13:39.163831  308083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:13:39.169004  308083 start.go:563] Will wait 60s for crictl version
	I1025 10:13:39.169072  308083 ssh_runner.go:195] Run: which crictl
	I1025 10:13:39.173735  308083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:13:39.204784  308083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:13:39.204890  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:39.239278  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:39.276711  308083 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:13:39.279715  308083 out.go:179]   - env NO_PROXY=192.168.49.2
	I1025 10:13:39.282816  308083 cli_runner.go:164] Run: docker network inspect ha-480889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:13:39.299629  308083 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 10:13:39.303856  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:39.314044  308083 mustload.go:65] Loading cluster: ha-480889
	I1025 10:13:39.314294  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:39.314598  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:13:39.343892  308083 host.go:66] Checking if "ha-480889" exists ...
	I1025 10:13:39.344182  308083 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889 for IP: 192.168.49.3
	I1025 10:13:39.344197  308083 certs.go:195] generating shared ca certs ...
	I1025 10:13:39.344211  308083 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:39.344335  308083 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:13:39.344393  308083 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:13:39.344406  308083 certs.go:257] generating profile certs ...
	I1025 10:13:39.344480  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key
	I1025 10:13:39.344547  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.1eaed255
	I1025 10:13:39.344593  308083 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key
	I1025 10:13:39.344606  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 10:13:39.344620  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 10:13:39.344636  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 10:13:39.344647  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 10:13:39.344663  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 10:13:39.344687  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 10:13:39.344718  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 10:13:39.344732  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 10:13:39.344792  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:13:39.344825  308083 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:13:39.344838  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:13:39.344861  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:13:39.344888  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:13:39.344914  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:13:39.344981  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:39.345016  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /usr/share/ca-certificates/2612562.pem
	I1025 10:13:39.345034  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:39.345045  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem -> /usr/share/ca-certificates/261256.pem
	I1025 10:13:39.345112  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:39.371934  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:39.470344  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1025 10:13:39.483516  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1025 10:13:39.501845  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1025 10:13:39.507200  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1025 10:13:39.527252  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1025 10:13:39.532933  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1025 10:13:39.549399  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1025 10:13:39.554586  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1025 10:13:39.570659  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1025 10:13:39.574962  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1025 10:13:39.584673  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1025 10:13:39.589172  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1025 10:13:39.598913  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:13:39.620680  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:13:39.644461  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:13:39.668589  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:13:39.692311  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:13:39.712807  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:13:39.739124  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:13:39.767676  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:13:39.790850  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:13:39.811105  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:13:39.833707  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:13:39.856043  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1025 10:13:39.869628  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1025 10:13:39.883404  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1025 10:13:39.897013  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1025 10:13:39.919485  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1025 10:13:39.945523  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1025 10:13:39.967210  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1025 10:13:39.994983  308083 ssh_runner.go:195] Run: openssl version
	I1025 10:13:40.002778  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:13:40.017144  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:13:40.022850  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:13:40.022982  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:13:40.073080  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:13:40.081683  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:13:40.090847  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:13:40.096142  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:13:40.096266  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:13:40.138985  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:13:40.147554  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:13:40.156382  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:40.161029  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:40.161195  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:40.202792  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:13:40.211314  308083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:13:40.215961  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:13:40.258002  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:13:40.301047  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:13:40.349624  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:13:40.395242  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:13:40.444494  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 10:13:40.496874  308083 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1025 10:13:40.496975  308083 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-480889-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
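
The kubelet drop-in above is the only per-node piece of the join: everything is shared except --hostname-override and --node-ip, which pin this kubelet to ha-480889-m02 and 192.168.49.3. A hedged text/template sketch of rendering such a unit (the template is abbreviated, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // An abbreviated unit in the spirit of the drop-in above; only the
    // per-node fields vary between control-plane members.
    const unit = "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Name}} --node-ip={{.IP}}\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        t.Execute(os.Stdout, map[string]string{
            "Version": "v1.34.1", "Name": "ha-480889-m02", "IP": "192.168.49.3",
        })
    }
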
	I1025 10:13:40.497007  308083 kube-vip.go:115] generating kube-vip config ...
	I1025 10:13:40.497062  308083 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1025 10:13:40.539654  308083 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:40.539717  308083 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1025 10:13:40.539780  308083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:13:40.558469  308083 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:13:40.558603  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1025 10:13:40.566867  308083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 10:13:40.583436  308083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:13:40.596901  308083 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
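	The scp memory lines copy content rendered in memory (the kubelet drop-in, the unit file, the kube-vip manifest) straight onto the node over the existing SSH connection, with no temporary local file. A hedged sketch of that pattern on top of golang.org/x/crypto/ssh; copyBytes is a hypothetical helper, not minikube's ssh_runner:
	
	package sshcopy
	
	import (
		"bytes"
		"fmt"
	
		"golang.org/x/crypto/ssh"
	)
	
	// copyBytes streams data to dst on the remote host by piping it into
	// `sudo tee`, one simple way to implement an "scp memory" style copy.
	func copyBytes(client *ssh.Client, data []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", dst))
	}
	
	Piping into sudo tee keeps the write privileged without requiring a root SSH login.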
	I1025 10:13:40.612066  308083 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1025 10:13:40.616047  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:40.627164  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:40.770079  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:40.784212  308083 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:13:40.784687  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:40.790656  308083 out.go:179] * Verifying Kubernetes components...
	I1025 10:13:40.793379  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:40.919442  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:40.934315  308083 kapi.go:59] client config for ha-480889: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1025 10:13:40.934388  308083 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
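	The kapi.go dump above is the client-go rest.Config minikube builds from the profile directory; the stale VIP host 192.168.49.254 is then swapped for the live control-plane endpoint 192.168.49.2 before any call is made. A hedged sketch of building an equivalent client; newClient is illustrative:
	
	package kapisketch
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	// newClient builds a clientset from the same pieces the log reports:
	// an https host plus client cert, key, and cluster CA files.
	func newClient(host, certFile, keyFile, caFile string) (*kubernetes.Clientset, error) {
		cfg := &rest.Config{
			Host: host, // e.g. https://192.168.49.2:8443 after the stale-VIP override
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: certFile,
				KeyFile:  keyFile,
				CAFile:   caFile,
			},
		}
		return kubernetes.NewForConfig(cfg)
	}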
	I1025 10:13:40.936607  308083 node_ready.go:35] waiting up to 6m0s for node "ha-480889-m02" to be "Ready" ...
	I1025 10:14:03.978798  308083 node_ready.go:49] node "ha-480889-m02" is "Ready"
	I1025 10:14:03.978827  308083 node_ready.go:38] duration metric: took 23.042187504s for node "ha-480889-m02" to be "Ready" ...
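	The node_ready.go wait is a poll on the Node object's Ready condition, which here flips to True after about 23 seconds. A hedged client-go sketch of such a wait; waitNodeReady is illustrative, not the actual minikube function:
	
	package readysketch
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitNodeReady polls the named Node until its Ready condition is True,
	// or until ctx expires (the wait above is capped at 6m0s).
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second):
			}
		}
	}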
	I1025 10:14:03.978841  308083 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:14:03.978901  308083 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:14:04.002008  308083 api_server.go:72] duration metric: took 23.217688145s to wait for apiserver process to appear ...
	I1025 10:14:04.002035  308083 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:14:04.002057  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:04.065805  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:14:04.065839  308083 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz body identical to the response above, elided]
	I1025 10:14:04.502158  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:04.511711  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:14:04.511802  308083 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz body identical to the response above, elided]
	I1025 10:14:05.002194  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:05.013361  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz body identical to the 10:14:04.511711 response above, elided]
	W1025 10:14:05.013506  308083 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz body identical to the response above, elided]
	I1025 10:14:05.503134  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:05.514732  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 10:14:05.518544  308083 api_server.go:141] control plane version: v1.34.1
	I1025 10:14:05.518622  308083 api_server.go:131] duration metric: took 1.516578961s to wait for apiserver health ...
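	The timestamps show the probe firing roughly every 500ms until the failing post-start hooks (rbac/bootstrap-roles last of all) report ok and /healthz returns 200. A hedged sketch of that loop; waitHealthz is illustrative, not minikube's api_server.go:
	
	package healthsketch
	
	import (
		"context"
		"net/http"
		"time"
	)
	
	// waitHealthz polls url until it returns 200 OK, treating 500 responses
	// like the [+]/[-] hook dumps above as "not ready yet".
	func waitHealthz(ctx context.Context, client *http.Client, url string) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}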
	I1025 10:14:05.518646  308083 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:14:05.535848  308083 system_pods.go:59] 26 kube-system pods found
	I1025 10:14:05.535941  308083 system_pods.go:61] "coredns-66bc5c9577-ctnsn" [4c76c01c-15ed-4930-ac1a-1e2bf7de3961] Running
	I1025 10:14:05.535963  308083 system_pods.go:61] "coredns-66bc5c9577-h4lrc" [ade89685-c5d2-4e4e-847d-7af6cb3fb862] Running
	I1025 10:14:05.535986  308083 system_pods.go:61] "etcd-ha-480889" [e343e174-731b-4eb7-97df-0220f254bfcf] Running
	I1025 10:14:05.536032  308083 system_pods.go:61] "etcd-ha-480889-m02" [52f56789-d8bf-4251-9316-a0b572f65125] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:14:05.536059  308083 system_pods.go:61] "etcd-ha-480889-m03" [7fb90646-4b60-4cc2-a527-c7e563bb182b] Running
	I1025 10:14:05.536100  308083 system_pods.go:61] "kindnet-227ts" [c2c62be9-5d6e-4a43-9eff-9a7e220282d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:14:05.536125  308083 system_pods.go:61] "kindnet-2fqxj" [da4ef885-af3d-4ee3-9151-cdca0253c911] Running
	I1025 10:14:05.536154  308083 system_pods.go:61] "kindnet-8fgmd" [13833b7e-6794-4f30-8bec-20375bd481f2] Running
	I1025 10:14:05.536192  308083 system_pods.go:61] "kindnet-92p8z" [c1f4d260-381c-42d8-a8a5-77ae60cf42c6] Running
	I1025 10:14:05.536214  308083 system_pods.go:61] "kube-apiserver-ha-480889" [3f293b6b-7247-48a0-aa80-508696bea727] Running
	I1025 10:14:05.536251  308083 system_pods.go:61] "kube-apiserver-ha-480889-m02" [faae5baa-e581-4254-b659-0687cfebfb67] Running
	I1025 10:14:05.536276  308083 system_pods.go:61] "kube-apiserver-ha-480889-m03" [f18f8a4d-22bd-48e4-9b23-e5383f2fce25] Running
	I1025 10:14:05.536299  308083 system_pods.go:61] "kube-controller-manager-ha-480889" [6c111362-d576-4cb0-b102-086f180ff7b7] Running
	I1025 10:14:05.536340  308083 system_pods.go:61] "kube-controller-manager-ha-480889-m02" [443192d3-d7a3-40c4-99bf-2a1eac354f88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:14:05.536367  308083 system_pods.go:61] "kube-controller-manager-ha-480889-m03" [c5d29ad2-f161-4c39-9de4-35916c43e02b] Running
	I1025 10:14:05.536392  308083 system_pods.go:61] "kube-proxy-29hlq" [2c0b691f-c26f-49bd-9b8b-39819ca8539d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:14:05.536425  308083 system_pods.go:61] "kube-proxy-4d5ks" [058d38d9-4dec-40ff-ac68-9651d27ba0c6] Running
	I1025 10:14:05.536449  308083 system_pods.go:61] "kube-proxy-6x5rb" [e73b3f75-02d7-46e3-940c-ffd727e4c87d] Running
	I1025 10:14:05.536471  308083 system_pods.go:61] "kube-proxy-9rtcs" [6fd17399-e636-4de6-aa9c-e0e3d3656c41] Running
	I1025 10:14:05.536506  308083 system_pods.go:61] "kube-scheduler-ha-480889" [9036810d-dce1-4542-ac53-b5d70020809c] Running
	I1025 10:14:05.536532  308083 system_pods.go:61] "kube-scheduler-ha-480889-m02" [f4c7c190-55e0-4bbf-9c22-fe9b3d8fc98d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:14:05.536556  308083 system_pods.go:61] "kube-scheduler-ha-480889-m03" [fdcb0331-d8b0-4fb0-9549-459e365b5863] Running
	I1025 10:14:05.536591  308083 system_pods.go:61] "kube-vip-ha-480889" [07959933-b7f0-46ad-9fa2-d9c661db7882] Running
	I1025 10:14:05.536614  308083 system_pods.go:61] "kube-vip-ha-480889-m02" [fea939ce-de9c-446b-b961-37a72c945913] Running
	I1025 10:14:05.536639  308083 system_pods.go:61] "kube-vip-ha-480889-m03" [f2a5dbed-19e6-4092-8340-c798578dfd40] Running
	I1025 10:14:05.536679  308083 system_pods.go:61] "storage-provisioner" [15113825-bb63-434f-bd5e-2ffd789452d6] Running
	I1025 10:14:05.536705  308083 system_pods.go:74] duration metric: took 18.038599ms to wait for pod list to return data ...
	I1025 10:14:05.536727  308083 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:14:05.551153  308083 default_sa.go:45] found service account: "default"
	I1025 10:14:05.551231  308083 default_sa.go:55] duration metric: took 14.469512ms for default service account to be created ...
	I1025 10:14:05.551256  308083 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:14:05.562144  308083 system_pods.go:86] 26 kube-system pods found
	I1025 10:14:05.562232  308083 system_pods.go:89] "coredns-66bc5c9577-ctnsn" [4c76c01c-15ed-4930-ac1a-1e2bf7de3961] Running
	I1025 10:14:05.562257  308083 system_pods.go:89] "coredns-66bc5c9577-h4lrc" [ade89685-c5d2-4e4e-847d-7af6cb3fb862] Running
	I1025 10:14:05.562298  308083 system_pods.go:89] "etcd-ha-480889" [e343e174-731b-4eb7-97df-0220f254bfcf] Running
	I1025 10:14:05.562329  308083 system_pods.go:89] "etcd-ha-480889-m02" [52f56789-d8bf-4251-9316-a0b572f65125] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:14:05.562357  308083 system_pods.go:89] "etcd-ha-480889-m03" [7fb90646-4b60-4cc2-a527-c7e563bb182b] Running
	I1025 10:14:05.562400  308083 system_pods.go:89] "kindnet-227ts" [c2c62be9-5d6e-4a43-9eff-9a7e220282d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:14:05.562424  308083 system_pods.go:89] "kindnet-2fqxj" [da4ef885-af3d-4ee3-9151-cdca0253c911] Running
	I1025 10:14:05.562452  308083 system_pods.go:89] "kindnet-8fgmd" [13833b7e-6794-4f30-8bec-20375bd481f2] Running
	I1025 10:14:05.562486  308083 system_pods.go:89] "kindnet-92p8z" [c1f4d260-381c-42d8-a8a5-77ae60cf42c6] Running
	I1025 10:14:05.562513  308083 system_pods.go:89] "kube-apiserver-ha-480889" [3f293b6b-7247-48a0-aa80-508696bea727] Running
	I1025 10:14:05.562563  308083 system_pods.go:89] "kube-apiserver-ha-480889-m02" [faae5baa-e581-4254-b659-0687cfebfb67] Running
	I1025 10:14:05.562590  308083 system_pods.go:89] "kube-apiserver-ha-480889-m03" [f18f8a4d-22bd-48e4-9b23-e5383f2fce25] Running
	I1025 10:14:05.562616  308083 system_pods.go:89] "kube-controller-manager-ha-480889" [6c111362-d576-4cb0-b102-086f180ff7b7] Running
	I1025 10:14:05.562658  308083 system_pods.go:89] "kube-controller-manager-ha-480889-m02" [443192d3-d7a3-40c4-99bf-2a1eac354f88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:14:05.562685  308083 system_pods.go:89] "kube-controller-manager-ha-480889-m03" [c5d29ad2-f161-4c39-9de4-35916c43e02b] Running
	I1025 10:14:05.562729  308083 system_pods.go:89] "kube-proxy-29hlq" [2c0b691f-c26f-49bd-9b8b-39819ca8539d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:14:05.562755  308083 system_pods.go:89] "kube-proxy-4d5ks" [058d38d9-4dec-40ff-ac68-9651d27ba0c6] Running
	I1025 10:14:05.562843  308083 system_pods.go:89] "kube-proxy-6x5rb" [e73b3f75-02d7-46e3-940c-ffd727e4c87d] Running
	I1025 10:14:05.562883  308083 system_pods.go:89] "kube-proxy-9rtcs" [6fd17399-e636-4de6-aa9c-e0e3d3656c41] Running
	I1025 10:14:05.562903  308083 system_pods.go:89] "kube-scheduler-ha-480889" [9036810d-dce1-4542-ac53-b5d70020809c] Running
	I1025 10:14:05.562928  308083 system_pods.go:89] "kube-scheduler-ha-480889-m02" [f4c7c190-55e0-4bbf-9c22-fe9b3d8fc98d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:14:05.562965  308083 system_pods.go:89] "kube-scheduler-ha-480889-m03" [fdcb0331-d8b0-4fb0-9549-459e365b5863] Running
	I1025 10:14:05.562991  308083 system_pods.go:89] "kube-vip-ha-480889" [07959933-b7f0-46ad-9fa2-d9c661db7882] Running
	I1025 10:14:05.563016  308083 system_pods.go:89] "kube-vip-ha-480889-m02" [fea939ce-de9c-446b-b961-37a72c945913] Running
	I1025 10:14:05.563070  308083 system_pods.go:89] "kube-vip-ha-480889-m03" [f2a5dbed-19e6-4092-8340-c798578dfd40] Running
	I1025 10:14:05.563096  308083 system_pods.go:89] "storage-provisioner" [15113825-bb63-434f-bd5e-2ffd789452d6] Running
	I1025 10:14:05.563122  308083 system_pods.go:126] duration metric: took 11.844458ms to wait for k8s-apps to be running ...
	I1025 10:14:05.563161  308083 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:14:05.563251  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:14:05.583878  308083 system_svc.go:56] duration metric: took 20.700093ms WaitForService to wait for kubelet
	I1025 10:14:05.583959  308083 kubeadm.go:586] duration metric: took 24.799662385s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:14:05.584013  308083 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:14:05.602014  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602101  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602129  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602149  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602183  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602208  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602232  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602268  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602294  308083 node_conditions.go:105] duration metric: took 18.245402ms to run NodePressure ...
	I1025 10:14:05.602322  308083 start.go:241] waiting for startup goroutines ...
	I1025 10:14:05.602372  308083 start.go:255] writing updated cluster config ...
	I1025 10:14:05.606107  308083 out.go:203] 
	I1025 10:14:05.609375  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:14:05.609570  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:14:05.612923  308083 out.go:179] * Starting "ha-480889-m03" control-plane node in "ha-480889" cluster
	I1025 10:14:05.616650  308083 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:14:05.619578  308083 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:14:05.622647  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:14:05.622723  308083 cache.go:58] Caching tarball of preloaded images
	I1025 10:14:05.622730  308083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:14:05.622888  308083 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:14:05.622906  308083 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:14:05.623058  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:14:05.644689  308083 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:14:05.644714  308083 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:14:05.644728  308083 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:14:05.644760  308083 start.go:360] acquireMachinesLock for ha-480889-m03: {Name:mkdc7aead07cc61c4483ca641c0f901f32cc9e0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:14:05.644832  308083 start.go:364] duration metric: took 40.6µs to acquireMachinesLock for "ha-480889-m03"
	I1025 10:14:05.644859  308083 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:14:05.644869  308083 fix.go:54] fixHost starting: m03
	I1025 10:14:05.645136  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m03 --format={{.State.Status}}
	I1025 10:14:05.665455  308083 fix.go:112] recreateIfNeeded on ha-480889-m03: state=Stopped err=<nil>
	W1025 10:14:05.665482  308083 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:14:05.668964  308083 out.go:252] * Restarting existing docker container for "ha-480889-m03" ...
	I1025 10:14:05.669067  308083 cli_runner.go:164] Run: docker start ha-480889-m03
	I1025 10:14:06.010869  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m03 --format={{.State.Status}}
	I1025 10:14:06.033631  308083 kic.go:430] container "ha-480889-m03" state is running.
	I1025 10:14:06.034025  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m03
	I1025 10:14:06.062398  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:14:06.062842  308083 machine.go:93] provisionDockerMachine start ...
	I1025 10:14:06.062924  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:06.096711  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:06.097013  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:06.097022  308083 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:14:06.100286  308083 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44394->127.0.0.1:33183: read: connection reset by peer
	I1025 10:14:09.422447  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m03
	
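	The first dial at 10:14:06 is reset because the just-restarted container's sshd is not yet accepting connections; libmachine keeps retrying until the handshake succeeds about three seconds later. A hedged sketch of such a retry on golang.org/x/crypto/ssh; dialWithRetry is illustrative:
	
	package sshdial
	
	import (
		"time"
	
		"golang.org/x/crypto/ssh"
	)
	
	// dialWithRetry redials until the container's sshd accepts the
	// handshake, smoothing over resets like the one logged above.
	func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			lastErr = err
			time.Sleep(time.Second)
		}
		return nil, lastErr
	}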
	I1025 10:14:09.422528  308083 ubuntu.go:182] provisioning hostname "ha-480889-m03"
	I1025 10:14:09.422611  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:09.454682  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:09.454994  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:09.455005  308083 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-480889-m03 && echo "ha-480889-m03" | sudo tee /etc/hostname
	I1025 10:14:09.716055  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m03
	
	I1025 10:14:09.716202  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:09.758198  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:09.758502  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:09.758518  308083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-480889-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-480889-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-480889-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:14:09.952740  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:14:09.952771  308083 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:14:09.952843  308083 ubuntu.go:190] setting up certificates
	I1025 10:14:09.952854  308083 provision.go:84] configureAuth start
	I1025 10:14:09.952966  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m03
	I1025 10:14:10.002091  308083 provision.go:143] copyHostCerts
	I1025 10:14:10.002146  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:14:10.002194  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:14:10.002207  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:14:10.002336  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:14:10.002445  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:14:10.002473  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:14:10.002482  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:14:10.002512  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:14:10.002620  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:14:10.002645  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:14:10.002656  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:14:10.002686  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:14:10.002748  308083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.ha-480889-m03 san=[127.0.0.1 192.168.49.4 ha-480889-m03 localhost minikube]
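	configureAuth reissues the machine's server certificate, signed by the local CA, with the SAN list shown (the node IP 192.168.49.4 plus hostnames). A stdlib-only sketch of issuing such a cert; signServerCert and its parameters are illustrative, not minikube's provision code:
	
	package certsketch
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)
	
	// signServerCert issues a server cert signed by the given CA, sorting each
	// SAN into IPAddresses or DNSNames as appropriate (cf. san=[...] above).
	func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, san := range sans {
			if ip := net.ParseIP(san); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, san)
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}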
	I1025 10:14:10.250973  308083 provision.go:177] copyRemoteCerts
	I1025 10:14:10.251332  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:14:10.251408  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:10.289237  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:10.436731  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 10:14:10.436797  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 10:14:10.544747  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 10:14:10.544817  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:14:10.630377  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 10:14:10.630464  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:14:10.673862  308083 provision.go:87] duration metric: took 720.988399ms to configureAuth
	I1025 10:14:10.673890  308083 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:14:10.674168  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:14:10.674521  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:10.707641  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:10.707938  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:10.707957  308083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:14:11.154845  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:14:11.154927  308083 machine.go:96] duration metric: took 5.092069874s to provisionDockerMachine
	I1025 10:14:11.154954  308083 start.go:293] postStartSetup for "ha-480889-m03" (driver="docker")
	I1025 10:14:11.154994  308083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:14:11.155090  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:14:11.155169  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.175592  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.283365  308083 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:14:11.286806  308083 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:14:11.286877  308083 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:14:11.286905  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:14:11.286994  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:14:11.287123  308083 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:14:11.287171  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /etc/ssl/certs/2612562.pem
	I1025 10:14:11.287295  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:14:11.295059  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:14:11.316093  308083 start.go:296] duration metric: took 161.095107ms for postStartSetup
	I1025 10:14:11.316217  308083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:14:11.316276  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.333862  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.435204  308083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:14:11.440180  308083 fix.go:56] duration metric: took 5.79530454s for fixHost
	I1025 10:14:11.440241  308083 start.go:83] releasing machines lock for "ha-480889-m03", held for 5.795361279s
	I1025 10:14:11.440311  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m03
	I1025 10:14:11.464304  308083 out.go:179] * Found network options:
	I1025 10:14:11.467314  308083 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1025 10:14:11.470389  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:14:11.470430  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:14:11.470457  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:14:11.470474  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	I1025 10:14:11.470546  308083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:14:11.470610  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.470888  308083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:14:11.470954  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.492648  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.500283  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.796571  308083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:14:11.919974  308083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:14:11.920047  308083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:14:11.930959  308083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:14:11.931034  308083 start.go:495] detecting cgroup driver to use...
	I1025 10:14:11.931084  308083 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:14:11.931150  308083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:14:11.976106  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:14:12.014574  308083 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:14:12.014688  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:14:12.063668  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:14:12.091979  308083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:14:12.314959  308083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:14:12.575887  308083 docker.go:234] disabling docker service ...
	I1025 10:14:12.575989  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:14:12.601545  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:14:12.619323  308083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:14:12.867377  308083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:14:13.108726  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:14:13.127994  308083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:14:13.145943  308083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:14:13.146033  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.156671  308083 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:14:13.156750  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.168655  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.184089  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.194894  308083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:14:13.204315  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.214077  308083 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.224397  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
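	crio.go applies its runtime settings (pause image, cgroupfs cgroup manager, conmon cgroup, the unprivileged-port sysctl) as in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and restart that follow. A hedged Go equivalent of one such edit; setConfKey is illustrative:
	
	package criosketch
	
	import (
		"os"
		"regexp"
	)
	
	// setConfKey rewrites every `key = ...` line in a crio drop-in to the given
	// quoted value, mirroring the `sed -i 's|^.*key = .*$|...|'` calls above.
	func setConfKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
		return os.WriteFile(path, out, 0o644)
	}
	
	For example, setConfKey("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10.1") reproduces the first sed above.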
	I1025 10:14:13.234566  308083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:14:13.243678  308083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:14:13.253013  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:14:13.493138  308083 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:15:43.813681  308083 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320502184s)
	I1025 10:15:43.813712  308083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:15:43.813771  308083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:15:43.818284  308083 start.go:563] Will wait 60s for crictl version
	I1025 10:15:43.818348  308083 ssh_runner.go:195] Run: which crictl
	I1025 10:15:43.822612  308083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:15:43.849591  308083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:15:43.849679  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:15:43.881155  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:15:43.916090  308083 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:15:43.919321  308083 out.go:179]   - env NO_PROXY=192.168.49.2
	I1025 10:15:43.922326  308083 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1025 10:15:43.925259  308083 cli_runner.go:164] Run: docker network inspect ha-480889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:15:43.954223  308083 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 10:15:43.958732  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:15:43.969465  308083 mustload.go:65] Loading cluster: ha-480889
	I1025 10:15:43.969714  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:15:43.969954  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:15:43.987361  308083 host.go:66] Checking if "ha-480889" exists ...
	I1025 10:15:43.987646  308083 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889 for IP: 192.168.49.4
	I1025 10:15:43.987660  308083 certs.go:195] generating shared ca certs ...
	I1025 10:15:43.987675  308083 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:15:43.987792  308083 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:15:43.987838  308083 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:15:43.987850  308083 certs.go:257] generating profile certs ...
	I1025 10:15:43.987924  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key
	I1025 10:15:43.987987  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.7d4a26e1
	I1025 10:15:43.988022  308083 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key
	I1025 10:15:43.988030  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 10:15:43.988044  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 10:15:43.988056  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 10:15:43.988066  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 10:15:43.988076  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 10:15:43.988088  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 10:15:43.988099  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 10:15:43.988111  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 10:15:43.988160  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:15:43.988188  308083 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:15:43.988197  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:15:43.988222  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:15:43.988244  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:15:43.988266  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:15:43.988306  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:15:43.988330  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /usr/share/ca-certificates/2612562.pem
	I1025 10:15:43.988342  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:43.988353  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem -> /usr/share/ca-certificates/261256.pem
	I1025 10:15:43.988408  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:15:44.012522  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:15:44.114325  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1025 10:15:44.118993  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1025 10:15:44.127630  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1025 10:15:44.131303  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1025 10:15:44.140046  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1025 10:15:44.144492  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1025 10:15:44.154086  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1025 10:15:44.158181  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1025 10:15:44.167518  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1025 10:15:44.171723  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1025 10:15:44.181427  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1025 10:15:44.185332  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1025 10:15:44.194266  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:15:44.214098  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:15:44.234054  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:15:44.256195  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:15:44.279031  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:15:44.299344  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:15:44.323793  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:15:44.345417  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:15:44.365719  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:15:44.388245  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:15:44.408144  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:15:44.428098  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1025 10:15:44.441938  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1025 10:15:44.457102  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1025 10:15:44.471357  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1025 10:15:44.485615  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1025 10:15:44.498465  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1025 10:15:44.511910  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1025 10:15:44.531258  308083 ssh_runner.go:195] Run: openssl version
	I1025 10:15:44.540606  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:15:44.550354  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:15:44.554246  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:15:44.554361  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:15:44.602272  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:15:44.611902  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:15:44.622835  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:44.629226  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:44.629299  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:44.670883  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:15:44.679524  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:15:44.689802  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:15:44.693893  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:15:44.694068  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:15:44.735651  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:15:44.743736  308083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:15:44.747896  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:15:44.790110  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:15:44.832406  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:15:44.874662  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:15:44.915849  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:15:44.959092  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
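[editor's note] The block above runs `openssl x509 -noout -in <cert> -checkend 86400` against each control-plane certificate, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours). A minimal Go sketch of the same check, using only crypto/x509 (an illustration, not minikube's certs.go):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// -checkend N fails when NotAfter falls within the next N seconds.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin(os.Args[1], 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}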
	I1025 10:15:45.002430  308083 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1025 10:15:45.002579  308083 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-480889-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:15:45.002609  308083 kube-vip.go:115] generating kube-vip config ...
	I1025 10:15:45.002683  308083 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1025 10:15:45.029854  308083 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:15:45.029925  308083 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
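[editor's note] Per the kube-vip.go:115/163 lines above, minikube probes for ipvs kernel modules with `lsmod | grep ip_vs` before generating the manifest, and falls back to plain ARP-based VIP failover when the modules are absent. A minimal sketch of that probe-and-fallback decision (an assumption for illustration, not the actual kube-vip.go code):

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable mirrors the probe in the log: `lsmod | grep ip_vs`
// exits 1 when no ip_vs module is loaded, which Run reports as an error.
func ipvsAvailable() bool {
	return exec.Command("sh", "-c", "lsmod | grep ip_vs").Run() == nil
}

func main() {
	if ipvsAvailable() {
		fmt.Println("enabling control-plane load-balancing (ipvs modules present)")
	} else {
		// This branch corresponds to the "giving up enabling
		// control-plane load-balancing" warning in the log.
		fmt.Println("falling back to ARP-only VIP (ipvs modules missing)")
	}
}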
	I1025 10:15:45.030057  308083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:15:45.063539  308083 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:15:45.063684  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1025 10:15:45.095087  308083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 10:15:45.131847  308083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:15:45.152140  308083 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1025 10:15:45.177067  308083 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1025 10:15:45.183642  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:15:45.224794  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:15:45.476283  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:15:45.492420  308083 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:15:45.492955  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:15:45.496345  308083 out.go:179] * Verifying Kubernetes components...
	I1025 10:15:45.499247  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:15:45.679197  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:15:45.698347  308083 kapi.go:59] client config for ha-480889: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1025 10:15:45.698425  308083 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1025 10:15:45.698682  308083 node_ready.go:35] waiting up to 6m0s for node "ha-480889-m03" to be "Ready" ...
	W1025 10:15:47.704097  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:15:50.202392  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:15:52.202756  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	[... 153 near-identical node_ready.go:57 retry lines elided; the same "Ready":"Unknown" message repeated roughly every 2.5s from 10:15:52 through 10:21:42 ...]
	W1025 10:21:44.702141  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	I1025 10:21:45.699723  308083 node_ready.go:38] duration metric: took 6m0.00101372s for node "ha-480889-m03" to be "Ready" ...
	I1025 10:21:45.702936  308083 out.go:203] 
	W1025 10:21:45.705812  308083 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1025 10:21:45.705837  308083 out.go:285] * 
	W1025 10:21:45.708064  308083 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:21:45.711065  308083 out.go:203] 
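[editor's note] The failure above is a timeout in minikube's node-readiness poll: node_ready.go re-reads the Node object every few seconds and checks its Ready condition until the 6m0s deadline lapses. A minimal client-go sketch of that pattern (an illustration under assumed defaults, not minikube's node_ready.go; the node name is taken from the log and the kubeconfig path is assumed to be the default ~/.kube/config):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True
// or the timeout elapses, mirroring the 6m wait that fails above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-480889-m03", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}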
	
	
	==> CRI-O <==
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.872198639Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=aa627551-b4d5-499e-bdd6-7970bf78bb8e name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.8732321Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3a445103-3339-479a-b20e-1c8b913f81b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.873332589Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.879036798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.879394684Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b711fedb8e2618cc0f4b880fad10f4bf8b29d19e8ac5c5fbc1ffc64bd2f05ae5/merged/etc/passwd: no such file or directory"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.879492153Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b711fedb8e2618cc0f4b880fad10f4bf8b29d19e8ac5c5fbc1ffc64bd2f05ae5/merged/etc/group: no such file or directory"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.879805058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.90911303Z" level=info msg="Created container 259b995f91b9c68705817e45cb74e856232ab4b1d45cae1a557d2406942ace53: kube-system/storage-provisioner/storage-provisioner" id=3a445103-3339-479a-b20e-1c8b913f81b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.910291953Z" level=info msg="Starting container: 259b995f91b9c68705817e45cb74e856232ab4b1d45cae1a557d2406942ace53" id=376239e4-87b0-44a9-9df0-3c3e5353824a name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.917558356Z" level=info msg="Started container" PID=1402 containerID=259b995f91b9c68705817e45cb74e856232ab4b1d45cae1a557d2406942ace53 description=kube-system/storage-provisioner/storage-provisioner id=376239e4-87b0-44a9-9df0-3c3e5353824a name=/runtime.v1.RuntimeService/StartContainer sandboxID=088d0d7b8bf0c2f621c0ae22566dca0cf1d81367602172bfbbd843248aea9931
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.555322496Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.559161147Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.559205784Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.559228766Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.563121449Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.563158898Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.563183087Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.573678291Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.573872779Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.574376406Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.578374066Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.578414764Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.578445763Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.582936063Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.582996642Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	259b995f91b9c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   088d0d7b8bf0c       storage-provisioner                 kube-system
	3d17e8c3e629c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   3                   8cbe108c8dc1a       kube-controller-manager-ha-480889   kube-system
	8a6f8ac4178b1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   a4778b8bb50e2       coredns-66bc5c9577-h4lrc            kube-system
	2c07e2732f356       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   221f4b21ed8c2       kube-proxy-6x5rb                    kube-system
	8b6196b876372       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   8039291c91840       busybox-7b57f96db7-wkwwg            default
	15568eac2b869       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   badf118cbd9c1       coredns-66bc5c9577-ctnsn            kube-system
	9e45eacfcf479       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   088d0d7b8bf0c       storage-provisioner                 kube-system
	fbcc0424a1c5f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   c3c6117dfa2fc       kindnet-8fgmd                       kube-system
	3d23dbb42715f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   2                   8cbe108c8dc1a       kube-controller-manager-ha-480889   kube-system
	07e7673199f69       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   f30b7eb202966       etcd-ha-480889                      kube-system
	0b2d139004d5a       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   dfd777c7213ec       kube-vip-ha-480889                  kube-system
	322c2cc726dbd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   b85982ed8ef84       kube-scheduler-ha-480889            kube-system
	170a3a9364b50       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   f6ee90b1515bb       kube-apiserver-ha-480889            kube-system
	
	
	==> coredns [15568eac2b869838ebb71f6d12525ec66bc41f9aa490cf1a68c490999f19b9d6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56266 - 39256 "HINFO IN 6126263590743240156.8598032974753550859. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030490651s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8a6f8ac4178b104f0091791bd890925441e209f21434df4df270395089143c26] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56215 - 36101 "HINFO IN 38725101095574367.261866642865519352. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.011735961s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-480889
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-480889
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=ha-480889
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_07_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:07:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-480889
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:21:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:21:12 +0000   Sat, 25 Oct 2025 10:07:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:21:12 +0000   Sat, 25 Oct 2025 10:07:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:21:12 +0000   Sat, 25 Oct 2025 10:07:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:21:12 +0000   Sat, 25 Oct 2025 10:14:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-480889
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                8216dfdd-af7a-457f-ad51-df588b2f2c14
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wkwwg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-ctnsn             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-h4lrc             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-480889                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-8fgmd                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-480889             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-480889    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-6x5rb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-480889             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-480889                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 7m47s                  kube-proxy       
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-480889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-480889 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-480889 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-480889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-480889 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-480889 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-480889 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	  Normal   NodeHasSufficientMemory  8m27s (x8 over 8m27s)  kubelet          Node ha-480889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m27s (x8 over 8m27s)  kubelet          Node ha-480889 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m27s (x8 over 8m27s)  kubelet          Node ha-480889 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m44s                  node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	  Normal   RegisteredNode           7m32s                  node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	
	
	Name:               ha-480889-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-480889-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=ha-480889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_25T10_08_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:08:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-480889-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:21:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:21:49 +0000   Sat, 25 Oct 2025 10:08:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:21:49 +0000   Sat, 25 Oct 2025 10:08:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:21:49 +0000   Sat, 25 Oct 2025 10:08:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:21:49 +0000   Sat, 25 Oct 2025 10:09:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-480889-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ff971242-1f4f-45cd-b767-f92823ae34e7
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-cmlf6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-480889-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-227ts                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-480889-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-480889-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-29hlq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-480889-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-480889-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m38s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   CIDRAssignmentFailed     13m                    cidrAllocator    Node ha-480889-m02 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           13m                    node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	  Normal   NodeHasSufficientPID     9m28s (x8 over 9m28s)  kubelet          Node ha-480889-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m28s (x8 over 9m28s)  kubelet          Node ha-480889-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m28s (x8 over 9m28s)  kubelet          Node ha-480889-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 8m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m22s (x8 over 8m22s)  kubelet          Node ha-480889-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m22s (x8 over 8m22s)  kubelet          Node ha-480889-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m22s (x8 over 8m22s)  kubelet          Node ha-480889-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m44s                  node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	  Normal   RegisteredNode           7m32s                  node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	
	
	Name:               ha-480889-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-480889-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=ha-480889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_25T10_11_08_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:11:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-480889-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:12:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 25 Oct 2025 10:11:49 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 25 Oct 2025 10:11:49 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 25 Oct 2025 10:11:49 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 25 Oct 2025 10:11:49 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-480889-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                cf43d700-f979-45ff-9dc8-5f80581e56db
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2fqxj       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-9rtcs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-480889-m04 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-480889-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-480889-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-480889-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m44s              node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   RegisteredNode           7m32s              node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   NodeNotReady             6m54s              node-controller  Node ha-480889-m04 status is now: NodeNotReady
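
	The Unknown conditions and both unreachable taints on ha-480889-m04 follow from its kubelet lease expiring at 10:12:49: once the lease goes stale, the node-lifecycle controller flips the node to NotReady and applies node.kubernetes.io/unreachable with NoSchedule and NoExecute effects. A quick way to confirm the taints by hand (a sketch, assuming the kubeconfig context ha-480889 from this run is still available):

	    # Print the taints the node-lifecycle controller placed on the lost worker:
	    kubectl --context ha-480889 get node ha-480889-m04 \
	      -o jsonpath='{range .spec.taints[*]}{.key}={.effect}{"\n"}{end}'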
	
	
	==> dmesg <==
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	[Oct25 09:35] overlayfs: idmapped layers are currently not supported
	[Oct25 09:36] overlayfs: idmapped layers are currently not supported
	[ +24.160248] overlayfs: idmapped layers are currently not supported
	[Oct25 09:37] overlayfs: idmapped layers are currently not supported
	[  +8.216028] overlayfs: idmapped layers are currently not supported
	[Oct25 09:38] overlayfs: idmapped layers are currently not supported
	[Oct25 09:39] overlayfs: idmapped layers are currently not supported
	[Oct25 09:41] overlayfs: idmapped layers are currently not supported
	[ +14.126672] overlayfs: idmapped layers are currently not supported
	[Oct25 09:42] overlayfs: idmapped layers are currently not supported
	[Oct25 09:43] overlayfs: idmapped layers are currently not supported
	[Oct25 09:45] kauditd_printk_skb: 8 callbacks suppressed
	[Oct25 09:47] overlayfs: idmapped layers are currently not supported
	[Oct25 09:53] overlayfs: idmapped layers are currently not supported
	[Oct25 09:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:07] overlayfs: idmapped layers are currently not supported
	[Oct25 10:08] overlayfs: idmapped layers are currently not supported
	[Oct25 10:09] overlayfs: idmapped layers are currently not supported
	[Oct25 10:11] overlayfs: idmapped layers are currently not supported
	[Oct25 10:12] overlayfs: idmapped layers are currently not supported
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[  +4.737500] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [07e7673199f69cfda9e91af2a66aad345a2ce7a92130398dd12fc4e17470e088] <==
	{"level":"warn","ts":"2025-10-25T10:21:34.910075Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"192220adada3ae40","rtt":"70.711228ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:34.910094Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"192220adada3ae40","rtt":"70.888112ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:38.588624Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:38.588686Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:39.911123Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"192220adada3ae40","rtt":"70.888112ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:39.911133Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"192220adada3ae40","rtt":"70.711228ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:42.590215Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:42.590268Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:44.911813Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"192220adada3ae40","rtt":"70.711228ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:44.911830Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"192220adada3ae40","rtt":"70.888112ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:46.592415Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:46.592493Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:49.641017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:43688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:49.666847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:43690","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T10:21:49.724815Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(112157709692785404 12593026477526642892)"}
	{"level":"info","ts":"2025-10-25T10:21:49.726949Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"192220adada3ae40","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-25T10:21:49.727076Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727124Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727194Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727248Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727316Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727359Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727410Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727459Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727532Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"192220adada3ae40"}
	
	
	==> kernel <==
	 10:21:56 up  2:04,  0 user,  load average: 1.99, 1.65, 1.68
	Linux ha-480889 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fbcc0424a1c5f8864ade5ed9949267a842ff3cf9126f862facc9e1aa5eacffff] <==
	I1025 10:21:17.552499       1 main.go:324] Node ha-480889-m04 has CIDR [10.244.4.0/24] 
	I1025 10:21:27.558068       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:21:27.558104       1 main.go:301] handling current node
	I1025 10:21:27.558122       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1025 10:21:27.558129       1 main.go:324] Node ha-480889-m02 has CIDR [10.244.1.0/24] 
	I1025 10:21:27.558590       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1025 10:21:27.558620       1 main.go:324] Node ha-480889-m03 has CIDR [10.244.3.0/24] 
	I1025 10:21:27.558936       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1025 10:21:27.559025       1 main.go:324] Node ha-480889-m04 has CIDR [10.244.4.0/24] 
	I1025 10:21:37.558162       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:21:37.558266       1 main.go:301] handling current node
	I1025 10:21:37.558288       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1025 10:21:37.558296       1 main.go:324] Node ha-480889-m02 has CIDR [10.244.1.0/24] 
	I1025 10:21:37.558464       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1025 10:21:37.558476       1 main.go:324] Node ha-480889-m03 has CIDR [10.244.3.0/24] 
	I1025 10:21:37.558535       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1025 10:21:37.558545       1 main.go:324] Node ha-480889-m04 has CIDR [10.244.4.0/24] 
	I1025 10:21:47.554097       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1025 10:21:47.554168       1 main.go:324] Node ha-480889-m03 has CIDR [10.244.3.0/24] 
	I1025 10:21:47.554480       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1025 10:21:47.554498       1 main.go:324] Node ha-480889-m04 has CIDR [10.244.4.0/24] 
	I1025 10:21:47.554611       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:21:47.554627       1 main.go:301] handling current node
	I1025 10:21:47.554640       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1025 10:21:47.554645       1 main.go:324] Node ha-480889-m02 has CIDR [10.244.1.0/24] 
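
	kindnet still programs routes for ha-480889-m03 (10.244.3.0/24) through 10:21:47, two seconds before etcd applies the member removal; that is consistent, since kindnet mirrors Node objects and the m03 Node is only deleted at the very end of the step. One hedged way to watch the routed CIDRs shrink after the deletion:

	    # Node-to-PodCIDR mapping kindnet programs; m03 should drop out once its Node object is deleted:
	    kubectl --context ha-480889 get nodes \
	      -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.podCIDR}{"\n"}{end}'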
	
	
	==> kube-apiserver [170a3a9364b5079051bd3c5c594733a45ac4ddd6193638cc413453308f5c0fac] <==
	I1025 10:14:03.967743       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:14:03.967829       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:14:03.973267       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:14:04.026613       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:14:04.032481       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:14:04.052238       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:14:04.052324       1 policy_source.go:240] refreshing policies
	I1025 10:14:04.058097       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:14:04.061386       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:14:04.072018       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:14:04.072031       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:14:04.084688       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	W1025 10:14:04.098372       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1025 10:14:04.099889       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:14:04.119656       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:14:04.126503       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:14:04.130630       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:14:04.130701       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:14:04.131677       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1025 10:14:04.134712       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	W1025 10:14:05.447579       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1025 10:14:06.750594       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:14:38.917463       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:14:48.841975       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:15:01.983934       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
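
	The two lease.go warnings trace the apiserver's endpoint reconciler: the kubernetes Service endpoints first shrink to the surviving peer (192.168.49.3) while this apiserver is still starting, then re-include 192.168.49.2 once it is healthy. The reconciled endpoint set can be read back directly (a sketch against the same context):

	    # Control-plane IPs currently backing the default/kubernetes Service:
	    kubectl --context ha-480889 get endpoints kubernetes \
	      -o jsonpath='{.subsets[0].addresses[*].ip}'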
	
	
	==> kube-controller-manager [3d17e8c3e629ce1a8cc189e9334fe0f0ede8346a9b11bb7ab70d582f3df753dd] <==
	I1025 10:14:23.281949       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:14:23.286288       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:14:23.286403       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:14:23.299410       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:14:23.300876       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:14:23.306254       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:14:23.306381       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-480889-m04"
	I1025 10:14:23.311210       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:14:23.312754       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:14:23.313149       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:14:23.313534       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-480889-m02"
	I1025 10:14:23.313587       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-480889-m03"
	I1025 10:14:23.313614       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-480889-m04"
	I1025 10:14:23.313645       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-480889"
	I1025 10:14:23.313684       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:14:23.317005       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:14:23.344025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:14:23.367963       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:14:23.368079       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:14:23.368095       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:14:38.895946       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-q2vqt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-q2vqt\": the object has been modified; please apply your changes to the latest version and try again"
	I1025 10:14:38.896149       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f9cd3e42-b9dd-4a9e-9497-cb7c76655b63", APIVersion:"v1", ResourceVersion:"304", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-q2vqt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-q2vqt": the object has been modified; please apply your changes to the latest version and try again
	I1025 10:14:48.851349       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-q2vqt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-q2vqt\": the object has been modified; please apply your changes to the latest version and try again"
	I1025 10:14:48.851413       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f9cd3e42-b9dd-4a9e-9497-cb7c76655b63", APIVersion:"v1", ResourceVersion:"304", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-q2vqt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-q2vqt": the object has been modified; please apply your changes to the latest version and try again
	I1025 10:20:12.113935       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-gzkw5"
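
	The repeated FailedToUpdateEndpointSlices events are ordinary optimistic-concurrency conflicts: the controller wrote kube-dns-q2vqt with a stale resourceVersion, got "the object has been modified", and retried successfully, so they are noise rather than the failure cause. The version the retry resolves against is visible with (assuming the slice name from this run still exists):

	    # Current resourceVersion of the contested EndpointSlice:
	    kubectl --context ha-480889 -n kube-system get endpointslice kube-dns-q2vqt \
	      -o jsonpath='{.metadata.resourceVersion}'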
	
	
	==> kube-controller-manager [3d23dbb42715f1bb9050f4885b532a8877bf1805090fe8f7db8038e263d7391a] <==
	I1025 10:13:51.570432       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:13:52.993227       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1025 10:13:52.993297       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:13:52.996751       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1025 10:13:52.996842       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1025 10:13:52.996860       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1025 10:13:52.996872       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 10:14:03.024456       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [2c07e2732f356b8a475ac49d8754bc57a66b40d6244caf09ba433eb3a403de55] <==
	I1025 10:14:07.748150       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:14:08.033949       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:14:08.140133       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:14:08.140226       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 10:14:08.140327       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:14:08.195750       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:14:08.196249       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:14:08.211646       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:14:08.212020       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:14:08.212082       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:14:08.213291       1 config.go:200] "Starting service config controller"
	I1025 10:14:08.217712       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:14:08.217781       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:14:08.217809       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:14:08.217847       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:14:08.217874       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:14:08.218634       1 config.go:309] "Starting node config controller"
	I1025 10:14:08.218703       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:14:08.218734       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:14:08.317931       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:14:08.318088       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:14:08.318104       1 shared_informer.go:356] "Caches are synced" controller="service config"
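
	kube-proxy's only complaint is the unset nodePortAddresses, which merely means NodePorts answer on every local IP; with minikube defaults this is expected. The setting lives in the kubeadm-managed kube-proxy ConfigMap, and whether it is set can be checked with (a sketch, not a recommendation to change it for this test):

	    # nodePortAddresses in the kube-proxy config; absent or empty means all local IPs:
	    kubectl --context ha-480889 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses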
	
	
	==> kube-scheduler [322c2cc726dbd336dc6d64af52ed0d7374e34249ef33e160f4bc633c2590c50d] <==
	E1025 10:13:49.391976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:13:49.571750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:13:50.070254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:13:50.309013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:13:50.582259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:13:54.319884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:13:55.549322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:13:55.831850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:13:56.712002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:13:56.744458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:13:56.861512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:13:57.774322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:13:58.224119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:13:58.380289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:13:58.672474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:13:58.770260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:13:58.898191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:13:59.065845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:13:59.239221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:13:59.371086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:13:59.586515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:13:59.607070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:14:00.498025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:14:01.364396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1025 10:14:19.077422       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
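
	The scheduler's burst of "Failed to watch ... forbidden" errors spans 10:13:49 to 10:14:01, the window in which the restarted apiserver was not yet serving the bootstrap RBAC for system:kube-scheduler; the "Caches are synced" line at 10:14:19 marks recovery, so these are transient. The binding the scheduler relies on can be inspected with:

	    # Bootstrap RBAC that grants the scheduler its list/watch permissions:
	    kubectl --context ha-480889 get clusterrolebinding system:kube-scheduler -o wide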
	
	
	==> kubelet <==
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.542088     798 apiserver.go:52] "Watching apiserver"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.543114     798 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.562505     798 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-480889" podUID="07959933-b7f0-46ad-9fa2-d9c661db7882"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.565354     798 scope.go:117] "RemoveContainer" containerID="3d23dbb42715f1bb9050f4885b532a8877bf1805090fe8f7db8038e263d7391a"
	Oct 25 10:14:06 ha-480889 kubelet[798]: E1025 10:14:06.567060     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-480889_kube-system(9a81b87b3b974d940626f18d45a6aab1)\"" pod="kube-system/kube-controller-manager-ha-480889" podUID="9a81b87b3b974d940626f18d45a6aab1"
	Oct 25 10:14:06 ha-480889 kubelet[798]: E1025 10:14:06.615909     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-480889\" already exists" pod="kube-system/etcd-ha-480889"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.623931     798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f73a13738b45c11bf39c58ec6843885" path="/var/lib/kubelet/pods/4f73a13738b45c11bf39c58ec6843885/volumes"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.638563     798 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.700998     798 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-480889"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.701172     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-480889"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705440     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/13833b7e-6794-4f30-8bec-20375bd481f2-cni-cfg\") pod \"kindnet-8fgmd\" (UID: \"13833b7e-6794-4f30-8bec-20375bd481f2\") " pod="kube-system/kindnet-8fgmd"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705602     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13833b7e-6794-4f30-8bec-20375bd481f2-xtables-lock\") pod \"kindnet-8fgmd\" (UID: \"13833b7e-6794-4f30-8bec-20375bd481f2\") " pod="kube-system/kindnet-8fgmd"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705700     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e73b3f75-02d7-46e3-940c-ffd727e4c87d-lib-modules\") pod \"kube-proxy-6x5rb\" (UID: \"e73b3f75-02d7-46e3-940c-ffd727e4c87d\") " pod="kube-system/kube-proxy-6x5rb"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705777     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13833b7e-6794-4f30-8bec-20375bd481f2-lib-modules\") pod \"kindnet-8fgmd\" (UID: \"13833b7e-6794-4f30-8bec-20375bd481f2\") " pod="kube-system/kindnet-8fgmd"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705902     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e73b3f75-02d7-46e3-940c-ffd727e4c87d-xtables-lock\") pod \"kube-proxy-6x5rb\" (UID: \"e73b3f75-02d7-46e3-940c-ffd727e4c87d\") " pod="kube-system/kube-proxy-6x5rb"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.706049     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/15113825-bb63-434f-bd5e-2ffd789452d6-tmp\") pod \"storage-provisioner\" (UID: \"15113825-bb63-434f-bd5e-2ffd789452d6\") " pod="kube-system/storage-provisioner"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.763587     798 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.863460     798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-480889" podStartSLOduration=0.863439797 podStartE2EDuration="863.439797ms" podCreationTimestamp="2025-10-25 10:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:14:06.837585194 +0000 UTC m=+38.428191520" watchObservedRunningTime="2025-10-25 10:14:06.863439797 +0000 UTC m=+38.454046098"
	Oct 25 10:14:07 ha-480889 kubelet[798]: W1025 10:14:07.082204     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/crio-221f4b21ed8c28b6fd1698347efb2e67bd612d196fc843d8d64f3be9c60b2221 WatchSource:0}: Error finding container 221f4b21ed8c28b6fd1698347efb2e67bd612d196fc843d8d64f3be9c60b2221: Status 404 returned error can't find the container with id 221f4b21ed8c28b6fd1698347efb2e67bd612d196fc843d8d64f3be9c60b2221
	Oct 25 10:14:08 ha-480889 kubelet[798]: I1025 10:14:08.400429     798 scope.go:117] "RemoveContainer" containerID="3d23dbb42715f1bb9050f4885b532a8877bf1805090fe8f7db8038e263d7391a"
	Oct 25 10:14:08 ha-480889 kubelet[798]: E1025 10:14:08.400599     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-480889_kube-system(9a81b87b3b974d940626f18d45a6aab1)\"" pod="kube-system/kube-controller-manager-ha-480889" podUID="9a81b87b3b974d940626f18d45a6aab1"
	Oct 25 10:14:20 ha-480889 kubelet[798]: I1025 10:14:20.620615     798 scope.go:117] "RemoveContainer" containerID="3d23dbb42715f1bb9050f4885b532a8877bf1805090fe8f7db8038e263d7391a"
	Oct 25 10:14:28 ha-480889 kubelet[798]: E1025 10:14:28.527965     798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb\": container with ID starting with 863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb not found: ID does not exist" containerID="863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb"
	Oct 25 10:14:28 ha-480889 kubelet[798]: I1025 10:14:28.528089     798 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb" err="rpc error: code = NotFound desc = could not find container \"863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb\": container with ID starting with 863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb not found: ID does not exist"
	Oct 25 10:14:37 ha-480889 kubelet[798]: I1025 10:14:37.870067     798 scope.go:117] "RemoveContainer" containerID="9e45eacfcf479b2839ca5aa015423a2b920806c92232de9220ff03c17f84e584"
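
	The kubelet lines show kube-controller-manager in a short CrashLoopBackOff (back-off 10s) while the apiserver was still refusing /healthz, with the retried container coming up around 10:14:20. Restart history for that static pod is available via (a sketch against the same context):

	    # How many times the controller-manager container restarted during the recovery:
	    kubectl --context ha-480889 -n kube-system get pod kube-controller-manager-ha-480889 \
	      -o jsonpath='{.status.containerStatuses[0].restartCount}'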
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-480889 -n ha-480889
helpers_test.go:269: (dbg) Run:  kubectl --context ha-480889 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-q5kt7
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-480889 describe pod busybox-7b57f96db7-q5kt7
helpers_test.go:290: (dbg) kubectl --context ha-480889 describe pod busybox-7b57f96db7-q5kt7:

-- stdout --
	Name:             busybox-7b57f96db7-q5kt7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r5xf9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-r5xf9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  105s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  105s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
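
The scheduler message above accounts for all four nodes: the two healthy control planes already run busybox replicas and fail the pod's anti-affinity, while the remaining two carry the node.kubernetes.io/unreachable taint (one later also marked unschedulable during deletion), so 0/4 nodes are feasible and preemption cannot free one. The same picture can be reproduced by hand:

    # Which nodes already host a busybox replica (anti-affinity) and which are tainted:
    kubectl --context ha-480889 get pods -l app=busybox -o wide
    kubectl --context ha-480889 get nodes \
      -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'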
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (8.40s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.43s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-480889" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-480889\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-480889\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-480889\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
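For context, the assertion above shells out to "out/minikube-linux-arm64 profile list --output json" and inspects the "Status" field of the matching entry under "valid". A minimal Go sketch of that check (not the test's actual helper; the binary path and JSON field names are taken from the failure message above, the rest is illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList mirrors only the fields the check needs from the
	// `profile list --output json` payload shown above.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-480889" && p.Status != "Degraded" {
				// In this run the profile reported "Starting" instead.
				fmt.Printf("profile %s: expected Degraded, got %s\n", p.Name, p.Status)
			}
		}
	}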
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-480889
helpers_test.go:243: (dbg) docker inspect ha-480889:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb",
	        "Created": "2025-10-25T10:07:16.735876836Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308208,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:13:21.399696936Z",
	            "FinishedAt": "2025-10-25T10:13:20.79843666Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/hosts",
	        "LogPath": "/var/lib/docker/containers/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb-json.log",
	        "Name": "/ha-480889",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-480889:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-480889",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb",
	                "LowerDir": "/var/lib/docker/overlay2/d159db5e3fba2acaf2be751adfd990d9559a06fe4315850b3c072a95af080135-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d159db5e3fba2acaf2be751adfd990d9559a06fe4315850b3c072a95af080135/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d159db5e3fba2acaf2be751adfd990d9559a06fe4315850b3c072a95af080135/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d159db5e3fba2acaf2be751adfd990d9559a06fe4315850b3c072a95af080135/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-480889",
	                "Source": "/var/lib/docker/volumes/ha-480889/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-480889",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-480889",
	                "name.minikube.sigs.k8s.io": "ha-480889",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "791d4899d5afa7873aa61454e9b98c6bf4cae328e5fac1d61bfb6966ee8cf636",
	            "SandboxKey": "/var/run/docker/netns/791d4899d5af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-480889": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:5c:03:eb:9b:24",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2218a4d410c8591103e2cd6973cfcc03970e864955c570ceafd8b830a42f8a91",
	                    "EndpointID": "f005f7f20c8dfee253108089d9a6288d3bd36c3e1a48e0821c1ab3d225d34362",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-480889",
	                        "808d21fd84e3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
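Rather than dumping the whole inspect document, individual fields can be pulled with docker's --format templates; the two templates in the sketch below appear verbatim later in this log (the harness's container-state check and its SSH host-port lookup). A minimal sketch, assuming the ha-480889 container above is still running:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Both templates are the ones the minikube harness itself runs
		// further down in this log; "running" and "33173" are the values
		// visible in the inspect dump above.
		templates := []string{
			"{{.State.Status}}",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		}
		for _, tmpl := range templates {
			out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-480889").Output()
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s => %s", tmpl, out)
		}
	}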
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-480889 -n ha-480889
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 logs -n 25: (1.32690862s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-480889 ssh -n ha-480889-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m02 sudo cat /home/docker/cp-test_ha-480889-m03_ha-480889-m02.txt                                         │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m03:/home/docker/cp-test.txt ha-480889-m04:/home/docker/cp-test_ha-480889-m03_ha-480889-m04.txt               │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test_ha-480889-m03_ha-480889-m04.txt                                         │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp testdata/cp-test.txt ha-480889-m04:/home/docker/cp-test.txt                                                             │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3016407791/001/cp-test_ha-480889-m04.txt │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt ha-480889:/home/docker/cp-test_ha-480889-m04_ha-480889.txt                       │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889 sudo cat /home/docker/cp-test_ha-480889-m04_ha-480889.txt                                                 │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt ha-480889-m02:/home/docker/cp-test_ha-480889-m04_ha-480889-m02.txt               │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m02 sudo cat /home/docker/cp-test_ha-480889-m04_ha-480889-m02.txt                                         │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ cp      │ ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt ha-480889-m03:/home/docker/cp-test_ha-480889-m04_ha-480889-m03.txt               │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ ssh     │ ha-480889 ssh -n ha-480889-m03 sudo cat /home/docker/cp-test_ha-480889-m04_ha-480889-m03.txt                                         │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ node    │ ha-480889 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ node    │ ha-480889 node start m02 --alsologtostderr -v 5                                                                                      │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:12 UTC │
	│ node    │ ha-480889 node list --alsologtostderr -v 5                                                                                           │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │                     │
	│ stop    │ ha-480889 stop --alsologtostderr -v 5                                                                                                │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:12 UTC │ 25 Oct 25 10:13 UTC │
	│ start   │ ha-480889 start --wait true --alsologtostderr -v 5                                                                                   │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:13 UTC │                     │
	│ node    │ ha-480889 node list --alsologtostderr -v 5                                                                                           │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ node    │ ha-480889 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-480889 │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:13:21
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:13:21.133168  308083 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:13:21.133290  308083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:21.133303  308083 out.go:374] Setting ErrFile to fd 2...
	I1025 10:13:21.133309  308083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:13:21.133562  308083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:13:21.133919  308083 out.go:368] Setting JSON to false
	I1025 10:13:21.134805  308083 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6953,"bootTime":1761380249,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:13:21.134877  308083 start.go:141] virtualization:  
	I1025 10:13:21.140316  308083 out.go:179] * [ha-480889] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:13:21.143327  308083 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:13:21.143404  308083 notify.go:220] Checking for updates...
	I1025 10:13:21.149301  308083 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:13:21.152089  308083 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:13:21.154925  308083 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:13:21.157773  308083 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:13:21.160618  308083 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:13:21.164113  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:21.164223  308083 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:13:21.197583  308083 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:13:21.197765  308083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:13:21.253016  308083 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-25 10:13:21.243524818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:13:21.253128  308083 docker.go:318] overlay module found
	I1025 10:13:21.256213  308083 out.go:179] * Using the docker driver based on existing profile
	I1025 10:13:21.259079  308083 start.go:305] selected driver: docker
	I1025 10:13:21.259120  308083 start.go:925] validating driver "docker" against &{Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:21.259253  308083 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:13:21.259348  308083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:13:21.316248  308083 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-25 10:13:21.30638419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:13:21.316658  308083 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:13:21.316688  308083 cni.go:84] Creating CNI manager for ""
	I1025 10:13:21.316750  308083 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1025 10:13:21.316803  308083 start.go:349] cluster config:
	{Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:21.320059  308083 out.go:179] * Starting "ha-480889" primary control-plane node in "ha-480889" cluster
	I1025 10:13:21.322881  308083 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:13:21.325849  308083 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:13:21.328624  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:13:21.328676  308083 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:13:21.328688  308083 cache.go:58] Caching tarball of preloaded images
	I1025 10:13:21.328730  308083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:13:21.328805  308083 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:13:21.328816  308083 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:13:21.328961  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:21.348972  308083 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:13:21.348996  308083 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:13:21.349014  308083 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:13:21.349046  308083 start.go:360] acquireMachinesLock for ha-480889: {Name:mk41781a5f7df8ed38323f26b29dd3de0536d841 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:13:21.349099  308083 start.go:364] duration metric: took 35.972µs to acquireMachinesLock for "ha-480889"
	I1025 10:13:21.349123  308083 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:13:21.349129  308083 fix.go:54] fixHost starting: 
	I1025 10:13:21.349386  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:13:21.366278  308083 fix.go:112] recreateIfNeeded on ha-480889: state=Stopped err=<nil>
	W1025 10:13:21.366311  308083 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:13:21.369548  308083 out.go:252] * Restarting existing docker container for "ha-480889" ...
	I1025 10:13:21.369634  308083 cli_runner.go:164] Run: docker start ha-480889
	I1025 10:13:21.622973  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:13:21.639685  308083 kic.go:430] container "ha-480889" state is running.
	I1025 10:13:21.640060  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:13:21.659744  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:21.659977  308083 machine.go:93] provisionDockerMachine start ...
	I1025 10:13:21.660037  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:21.679901  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:21.680217  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:21.680227  308083 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:13:21.681077  308083 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37726->127.0.0.1:33173: read: connection reset by peer
	I1025 10:13:24.829722  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889
	
	I1025 10:13:24.829748  308083 ubuntu.go:182] provisioning hostname "ha-480889"
	I1025 10:13:24.829819  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:24.848138  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:24.848455  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:24.848472  308083 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-480889 && echo "ha-480889" | sudo tee /etc/hostname
	I1025 10:13:25.012654  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889
	
	I1025 10:13:25.012743  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:25.031520  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:25.031847  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:25.031875  308083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-480889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-480889/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-480889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:13:25.182388  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:13:25.182461  308083 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:13:25.182530  308083 ubuntu.go:190] setting up certificates
	I1025 10:13:25.182567  308083 provision.go:84] configureAuth start
	I1025 10:13:25.182666  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:13:25.200092  308083 provision.go:143] copyHostCerts
	I1025 10:13:25.200133  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:25.200165  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:13:25.200172  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:25.200245  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:13:25.200331  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:25.200352  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:13:25.200357  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:25.200382  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:13:25.200423  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:25.200438  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:13:25.200442  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:25.200464  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:13:25.200507  308083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.ha-480889 san=[127.0.0.1 192.168.49.2 ha-480889 localhost minikube]
	I1025 10:13:25.925035  308083 provision.go:177] copyRemoteCerts
	I1025 10:13:25.925106  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:13:25.925148  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:25.941975  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.046168  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 10:13:26.046249  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:13:26.065892  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 10:13:26.065964  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1025 10:13:26.086519  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 10:13:26.086582  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:13:26.105106  308083 provision.go:87] duration metric: took 922.501142ms to configureAuth
	I1025 10:13:26.105133  308083 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:13:26.105365  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:26.105486  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.123735  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:26.124045  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I1025 10:13:26.124102  308083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:13:26.451879  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:13:26.451953  308083 machine.go:96] duration metric: took 4.791965867s to provisionDockerMachine
	I1025 10:13:26.451985  308083 start.go:293] postStartSetup for "ha-480889" (driver="docker")
	I1025 10:13:26.452035  308083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:13:26.452145  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:13:26.452222  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.474611  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.586070  308083 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:13:26.589442  308083 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:13:26.589480  308083 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:13:26.589492  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:13:26.589557  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:13:26.589654  308083 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:13:26.589667  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /etc/ssl/certs/2612562.pem
	I1025 10:13:26.589769  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:13:26.597470  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:26.615616  308083 start.go:296] duration metric: took 163.578765ms for postStartSetup
	I1025 10:13:26.615697  308083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:13:26.615759  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.632968  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.735211  308083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:13:26.740030  308083 fix.go:56] duration metric: took 5.390893179s for fixHost
	I1025 10:13:26.740056  308083 start.go:83] releasing machines lock for "ha-480889", held for 5.390944264s
	I1025 10:13:26.740127  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:13:26.756884  308083 ssh_runner.go:195] Run: cat /version.json
	I1025 10:13:26.756940  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.756964  308083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:13:26.757017  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:26.775539  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.778199  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:26.873785  308083 ssh_runner.go:195] Run: systemctl --version
	I1025 10:13:26.965654  308083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:13:27.005417  308083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:13:27.010728  308083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:13:27.010810  308083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:13:27.019133  308083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:13:27.019158  308083 start.go:495] detecting cgroup driver to use...
	I1025 10:13:27.019210  308083 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:13:27.019280  308083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:13:27.034337  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:13:27.047938  308083 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:13:27.048000  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:13:27.063832  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:13:27.081381  308083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:13:27.198834  308083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:13:27.303413  308083 docker.go:234] disabling docker service ...
	I1025 10:13:27.303534  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:13:27.318254  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:13:27.331149  308083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:13:27.440477  308083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:13:27.554598  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:13:27.567225  308083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:13:27.581183  308083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:13:27.581264  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.590278  308083 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:13:27.590389  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.599250  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.607897  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.616848  308083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:13:27.625132  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.634834  308083 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.643393  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:27.653830  308083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:13:27.661579  308083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:13:27.669371  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:27.781686  308083 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:13:27.909770  308083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:13:27.909891  308083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:13:27.913604  308083 start.go:563] Will wait 60s for crictl version
	I1025 10:13:27.913677  308083 ssh_runner.go:195] Run: which crictl
	I1025 10:13:27.917354  308083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:13:27.943799  308083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:13:27.943944  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:27.972380  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:28.006726  308083 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:13:28.009638  308083 cli_runner.go:164] Run: docker network inspect ha-480889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:13:28.029757  308083 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 10:13:28.033806  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:28.045238  308083 kubeadm.go:883] updating cluster {Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:13:28.046168  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:13:28.046264  308083 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:28.081721  308083 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:13:28.081747  308083 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:13:28.081804  308083 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:13:28.109690  308083 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:13:28.109715  308083 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:13:28.109724  308083 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 10:13:28.109840  308083 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-480889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
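
The [Unit]/[Service] fragment above is the systemd drop-in written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 359-byte scp). The empty ExecStart= line is deliberate: for list-valued settings such as ExecStart, a bare assignment clears whatever the base unit defined, so the drop-in replaces the command line instead of appending a second one (which systemd would reject for a non-oneshot service). One way to inspect the merged result on the node (illustrative; not part of the test run):

sudo systemctl cat kubelet   # prints the base unit followed by the 10-kubeadm.conf drop-in
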
	I1025 10:13:28.109926  308083 ssh_runner.go:195] Run: crio config
	I1025 10:13:28.181906  308083 cni.go:84] Creating CNI manager for ""
	I1025 10:13:28.181927  308083 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1025 10:13:28.181947  308083 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:13:28.181970  308083 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-480889 NodeName:ha-480889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:13:28.182120  308083 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-480889"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
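
The rendered file above stacks four YAML documents separated by "---" (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); kubeadm splits and validates them independently. It is written below to /var/tmp/minikube/kubeadm.yaml.new (the 2206-byte scp) and only applied if it differs from the kubeadm.yaml already on the node. A hand check of such a file, assuming access to the node's kubeadm binary (illustrative; not run by the test):

sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run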
	
	I1025 10:13:28.182142  308083 kube-vip.go:115] generating kube-vip config ...
	I1025 10:13:28.182194  308083 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1025 10:13:28.194754  308083 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:28.194852  308083 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
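
Because the lsmod probe above found no ip_vs modules, kube-vip runs in ARP leader-election mode rather than IPVS load-balancing: whichever control-plane node holds the plndr-cp-lock lease (5s duration, 3s renew deadline, 1s retry, per the env vars) answers ARP for the VIP 192.168.49.254 on eth0, and another node takes over when the lease lapses. A minimal sketch of templating such a static-pod manifest, in the spirit of minikube's kube-vip config generation (struct and template names are illustrative, and the manifest is abridged):

package main

import (
	"os"
	"text/template"
)

// vipParams holds the per-cluster values; everything else in a kube-vip
// static-pod manifest (lease tuning, capabilities, mounts) is boilerplate.
type vipParams struct {
	VIP       string
	Interface string
	Port      string
	Image     string
}

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - {name: vip_arp, value: "true"}
    - {name: port, value: "{{.Port}}"}
    - {name: vip_interface, value: {{.Interface}}}
    - {name: address, value: {{.VIP}}}
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	_ = t.Execute(os.Stdout, vipParams{
		VIP:       "192.168.49.254",
		Interface: "eth0",
		Port:      "8443",
		Image:     "ghcr.io/kube-vip/kube-vip:v1.0.1",
	})
}
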
	I1025 10:13:28.194915  308083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:13:28.202716  308083 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:13:28.202791  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1025 10:13:28.211249  308083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1025 10:13:28.224427  308083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:13:28.236965  308083 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1025 10:13:28.249237  308083 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1025 10:13:28.261093  308083 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1025 10:13:28.265704  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:28.275389  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:28.388284  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:28.404560  308083 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889 for IP: 192.168.49.2
	I1025 10:13:28.404624  308083 certs.go:195] generating shared ca certs ...
	I1025 10:13:28.404659  308083 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:28.404824  308083 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:13:28.404900  308083 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:13:28.404925  308083 certs.go:257] generating profile certs ...
	I1025 10:13:28.405027  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key
	I1025 10:13:28.405078  308083 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d
	I1025 10:13:28.405107  308083 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1025 10:13:29.281974  308083 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d ...
	I1025 10:13:29.282465  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d: {Name:mk2ee9cff9ddeca542ff438d607ca92d489e621a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:29.282692  308083 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d ...
	I1025 10:13:29.282818  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d: {Name:mk666a1056a90e3af7ff477b2ecc4f82c52a5311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:29.282987  308083 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt.a013837d -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt
	I1025 10:13:29.283272  308083 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.a013837d -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key
	I1025 10:13:29.283463  308083 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key
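
The SAN list on the regenerated apiserver certificate is worth noting: it covers the in-cluster service VIP 10.96.0.1, loopback, all three control-plane node IPs, and the kube-vip VIP 192.168.49.254, so clients can verify the apiserver over any of those paths. A compact sketch of issuing an IP-SAN certificate with Go's standard library (self-signed here for brevity; minikube signs with the cluster CA, and the names below are illustrative):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newAPIServerCert self-signs a cert whose IP SANs echo the log above;
// a real issuer would sign with the cluster CA instead of tmpl itself.
func newAPIServerCert() ([]byte, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.254"),
		},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
}

func main() { _, _ = newAPIServerCert() }
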
	I1025 10:13:29.283498  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 10:13:29.283530  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 10:13:29.283570  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 10:13:29.283605  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 10:13:29.283633  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 10:13:29.283680  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 10:13:29.283712  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 10:13:29.283743  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 10:13:29.283826  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:13:29.283879  308083 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:13:29.283905  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:13:29.283959  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:13:29.284007  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:13:29.284066  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:13:29.284138  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:29.284221  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.284263  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.284295  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem -> /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.284844  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:13:29.339963  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:13:29.378039  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:13:29.412109  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:13:29.439404  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:13:29.471848  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:13:29.495108  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:13:29.521223  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:13:29.555889  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:13:29.583865  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:13:29.607803  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:13:29.660341  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:13:29.687106  308083 ssh_runner.go:195] Run: openssl version
	I1025 10:13:29.696444  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:13:29.707221  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.717578  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.717659  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:13:29.790492  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:13:29.802381  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:13:29.810802  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.815111  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.815223  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:29.864875  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:13:29.872882  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:13:29.882139  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.887141  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.887254  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:13:29.933083  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
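
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed CA directory convention: each trusted certificate must be reachable in /etc/ssl/certs under <subject-hash>.0 (b5213941.0 for minikubeCA.pem above), which is how OpenSSL-linked clients locate issuers. A sketch of the same step from Go, shelling out for the hash just as the logged commands do (paths illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert symlinks certPath into dir under its OpenSSL subject hash,
// mirroring the "openssl x509 -hash -noout" + "ln -fs" pair in the log.
func linkCert(dir, certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", dir, hash)
	_ = os.Remove(link) // -f semantics: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/etc/ssl/certs", "/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
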
	I1025 10:13:29.942393  308083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:13:29.946745  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:13:29.992960  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:13:30.044394  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:13:30.092620  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:13:30.151671  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:13:30.195276  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
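
Each -checkend 86400 invocation above exits non-zero if the certificate expires within the next 24 hours, which would trigger regeneration. A pure-Go equivalent for reference (file path illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first PEM certificate in path expires
// within d, matching "openssl x509 -checkend" semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
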
	I1025 10:13:30.238904  308083 kubeadm.go:400] StartCluster: {Name:ha-480889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:13:30.239101  308083 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:13:30.239204  308083 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:13:30.304407  308083 cri.go:89] found id: "07e7673199f69cfda9e91af2a66aad345a2ce7a92130398dd12fc4e17470e088"
	I1025 10:13:30.304479  308083 cri.go:89] found id: "9e3b516f6f15caae43bda25f85832b5ad9a201e6c7b833a1ba0ec9db87f687fd"
	I1025 10:13:30.304499  308083 cri.go:89] found id: "0b2d139004d5afcec6c5e7f18831bff8c069ba521b289758825ffdd6fd892697"
	I1025 10:13:30.304523  308083 cri.go:89] found id: "322c2cc726dbd336dc6d64af52ed0d7374e34249ef33e160f4bc633c2590c50d"
	I1025 10:13:30.304554  308083 cri.go:89] found id: "170a3a9364b5079051bd3c5c594733a45ac4ddd6193638cc413453308f5c0fac"
	I1025 10:13:30.304578  308083 cri.go:89] found id: ""
	I1025 10:13:30.304661  308083 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:13:30.328956  308083 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:13:30Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:13:30.329101  308083 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:13:30.340608  308083 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:13:30.340681  308083 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:13:30.340762  308083 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:13:30.351736  308083 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:13:30.352209  308083 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-480889" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:13:30.352379  308083 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-259409/kubeconfig needs updating (will repair): [kubeconfig missing "ha-480889" cluster setting kubeconfig missing "ha-480889" context setting]
	I1025 10:13:30.352687  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:30.353275  308083 kapi.go:59] client config for ha-480889: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:13:30.354022  308083 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1025 10:13:30.354112  308083 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 10:13:30.354147  308083 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 10:13:30.354173  308083 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 10:13:30.354194  308083 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 10:13:30.354220  308083 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 10:13:30.354596  308083 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:13:30.369232  308083 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1025 10:13:30.369295  308083 kubeadm.go:601] duration metric: took 28.594078ms to restartPrimaryControlPlane
	I1025 10:13:30.369334  308083 kubeadm.go:402] duration metric: took 130.438978ms to StartCluster
	I1025 10:13:30.369370  308083 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:30.369458  308083 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:13:30.370118  308083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:30.370359  308083 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:13:30.370404  308083 start.go:241] waiting for startup goroutines ...
	I1025 10:13:30.370435  308083 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:13:30.370975  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:30.376476  308083 out.go:179] * Enabled addons: 
	I1025 10:13:30.379493  308083 addons.go:514] duration metric: took 9.050073ms for enable addons: enabled=[]
	I1025 10:13:30.379556  308083 start.go:246] waiting for cluster config update ...
	I1025 10:13:30.379587  308083 start.go:255] writing updated cluster config ...
	I1025 10:13:30.382748  308083 out.go:203] 
	I1025 10:13:30.385876  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:30.386069  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:30.389383  308083 out.go:179] * Starting "ha-480889-m02" control-plane node in "ha-480889" cluster
	I1025 10:13:30.392170  308083 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:13:30.395076  308083 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:13:30.397919  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:13:30.397962  308083 cache.go:58] Caching tarball of preloaded images
	I1025 10:13:30.398098  308083 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:13:30.398132  308083 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:13:30.398282  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:30.398534  308083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:13:30.435730  308083 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:13:30.435756  308083 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:13:30.435773  308083 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:13:30.435796  308083 start.go:360] acquireMachinesLock for ha-480889-m02: {Name:mk5fa3d1d910363d3e584c1db68856801d0a168a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:13:30.435853  308083 start.go:364] duration metric: took 36.152µs to acquireMachinesLock for "ha-480889-m02"
	I1025 10:13:30.435879  308083 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:13:30.435886  308083 fix.go:54] fixHost starting: m02
	I1025 10:13:30.436144  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m02 --format={{.State.Status}}
	I1025 10:13:30.486709  308083 fix.go:112] recreateIfNeeded on ha-480889-m02: state=Stopped err=<nil>
	W1025 10:13:30.486741  308083 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:13:30.490037  308083 out.go:252] * Restarting existing docker container for "ha-480889-m02" ...
	I1025 10:13:30.490126  308083 cli_runner.go:164] Run: docker start ha-480889-m02
	I1025 10:13:30.892304  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m02 --format={{.State.Status}}
	I1025 10:13:30.928214  308083 kic.go:430] container "ha-480889-m02" state is running.
	I1025 10:13:30.928591  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m02
	I1025 10:13:30.962308  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:13:30.962572  308083 machine.go:93] provisionDockerMachine start ...
	I1025 10:13:30.962636  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:30.991814  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:30.992103  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:30.992112  308083 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:13:30.992798  308083 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53254->127.0.0.1:33178: read: connection reset by peer
	I1025 10:13:34.218384  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m02
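
The connection reset logged at 10:13:30.992798 is expected: the m02 container had been started only moments earlier and its sshd was not yet accepting connections, so the first handshake failed and the client retried until the hostname probe succeeded about three seconds later. A minimal wait-for-sshd loop in that style (address illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP polls addr until a TCP connect succeeds or the deadline
// passes, the same retry pattern implied by the log above.
func waitForTCP(addr string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not reachable within %s", addr, deadline)
}

func main() {
	fmt.Println(waitForTCP("127.0.0.1:33178", 60*time.Second))
}
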
	
	I1025 10:13:34.218468  308083 ubuntu.go:182] provisioning hostname "ha-480889-m02"
	I1025 10:13:34.218568  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:34.242087  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:34.242402  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:34.242413  308083 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-480889-m02 && echo "ha-480889-m02" | sudo tee /etc/hostname
	I1025 10:13:34.553498  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m02
	
	I1025 10:13:34.553579  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:34.605778  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:34.606154  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:34.606179  308083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-480889-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-480889-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-480889-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:13:34.786380  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:13:34.786405  308083 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:13:34.786423  308083 ubuntu.go:190] setting up certificates
	I1025 10:13:34.786433  308083 provision.go:84] configureAuth start
	I1025 10:13:34.786494  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m02
	I1025 10:13:34.812196  308083 provision.go:143] copyHostCerts
	I1025 10:13:34.812238  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:34.812271  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:13:34.812277  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:13:34.812354  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:13:34.812427  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:34.812443  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:13:34.812448  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:13:34.812473  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:13:34.812508  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:34.812524  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:13:34.812528  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:13:34.812550  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:13:34.812594  308083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.ha-480889-m02 san=[127.0.0.1 192.168.49.3 ha-480889-m02 localhost minikube]
	I1025 10:13:35.433499  308083 provision.go:177] copyRemoteCerts
	I1025 10:13:35.437355  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:13:35.437432  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:35.478086  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:35.600269  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 10:13:35.600335  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:13:35.625245  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 10:13:35.625308  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 10:13:35.656095  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 10:13:35.656153  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:13:35.702462  308083 provision.go:87] duration metric: took 916.014065ms to configureAuth
	I1025 10:13:35.702539  308083 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:13:35.702849  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:35.703008  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:35.743726  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:13:35.744035  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I1025 10:13:35.744050  308083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:13:36.131741  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
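
The sysconfig drop-in just written passes --insecure-registry 10.96.0.0/12 to CRI-O, whitelisting the entire service CIDR so that in-cluster registry Services can be pulled from without TLS; restarting crio in the same SSH command makes it take effect immediately. Assuming the crio unit loads this file through an EnvironmentFile= directive (an assumption about the kicbase image, not shown in the log), the merged environment could be checked with:

systemctl show crio --property=Environment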
	
	I1025 10:13:36.131816  308083 machine.go:96] duration metric: took 5.16923304s to provisionDockerMachine
	I1025 10:13:36.131850  308083 start.go:293] postStartSetup for "ha-480889-m02" (driver="docker")
	I1025 10:13:36.131900  308083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:13:36.132016  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:13:36.132089  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.151273  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.257973  308083 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:13:36.261457  308083 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:13:36.261487  308083 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:13:36.261499  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:13:36.261552  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:13:36.261635  308083 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:13:36.261648  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /etc/ssl/certs/2612562.pem
	I1025 10:13:36.261749  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:13:36.269152  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:36.286996  308083 start.go:296] duration metric: took 155.094351ms for postStartSetup
	I1025 10:13:36.287074  308083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:13:36.287145  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.305008  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.411951  308083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:13:36.420078  308083 fix.go:56] duration metric: took 5.984184266s for fixHost
	I1025 10:13:36.420100  308083 start.go:83] releasing machines lock for "ha-480889-m02", held for 5.984233964s
	I1025 10:13:36.420167  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m02
	I1025 10:13:36.443663  308083 out.go:179] * Found network options:
	I1025 10:13:36.446961  308083 out.go:179]   - NO_PROXY=192.168.49.2
	W1025 10:13:36.450808  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:13:36.450851  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	I1025 10:13:36.450943  308083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:13:36.450993  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.451266  308083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:13:36.451340  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m02
	I1025 10:13:36.496453  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.500270  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m02/id_rsa Username:docker}
	I1025 10:13:36.756746  308083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:13:36.868709  308083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:13:36.868786  308083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:13:36.881721  308083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:13:36.881748  308083 start.go:495] detecting cgroup driver to use...
	I1025 10:13:36.881782  308083 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:13:36.881843  308083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:13:36.907834  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:13:36.928826  308083 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:13:36.928911  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:13:36.951297  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:13:36.978500  308083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:13:37.180812  308083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:13:37.373723  308083 docker.go:234] disabling docker service ...
	I1025 10:13:37.373791  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:13:37.390746  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:13:37.405594  308083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:13:37.625534  308083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:13:37.834157  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:13:37.849602  308083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:13:37.879998  308083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:13:37.880065  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.894893  308083 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:13:37.894974  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.912955  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.922956  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.937706  308083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:13:37.948806  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.959464  308083 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.972181  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:13:37.983464  308083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:13:38.003743  308083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:13:38.037815  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:38.334072  308083 ssh_runner.go:195] Run: sudo systemctl restart crio
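
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged low ports) and then enables IPv4 forwarding before restarting CRI-O. Reconstructed from those commands, the resulting fragment should look roughly like this (section headers assumed from CRI-O's stock config layout, not captured from the node):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
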
	I1025 10:13:39.163742  308083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:13:39.163831  308083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:13:39.169004  308083 start.go:563] Will wait 60s for crictl version
	I1025 10:13:39.169072  308083 ssh_runner.go:195] Run: which crictl
	I1025 10:13:39.173735  308083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:13:39.204784  308083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:13:39.204890  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:39.239278  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:13:39.276711  308083 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:13:39.279715  308083 out.go:179]   - env NO_PROXY=192.168.49.2
	I1025 10:13:39.282816  308083 cli_runner.go:164] Run: docker network inspect ha-480889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:13:39.299629  308083 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 10:13:39.303856  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:39.314044  308083 mustload.go:65] Loading cluster: ha-480889
	I1025 10:13:39.314294  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:39.314598  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:13:39.343892  308083 host.go:66] Checking if "ha-480889" exists ...
	I1025 10:13:39.344182  308083 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889 for IP: 192.168.49.3
	I1025 10:13:39.344197  308083 certs.go:195] generating shared ca certs ...
	I1025 10:13:39.344211  308083 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:13:39.344335  308083 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:13:39.344393  308083 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:13:39.344406  308083 certs.go:257] generating profile certs ...
	I1025 10:13:39.344480  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key
	I1025 10:13:39.344547  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.1eaed255
	I1025 10:13:39.344593  308083 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key
	I1025 10:13:39.344606  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 10:13:39.344620  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 10:13:39.344636  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 10:13:39.344647  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 10:13:39.344663  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 10:13:39.344687  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 10:13:39.344718  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 10:13:39.344732  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 10:13:39.344792  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:13:39.344825  308083 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:13:39.344838  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:13:39.344861  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:13:39.344888  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:13:39.344914  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:13:39.344981  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:13:39.345016  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /usr/share/ca-certificates/2612562.pem
	I1025 10:13:39.345034  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:39.345045  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem -> /usr/share/ca-certificates/261256.pem
	I1025 10:13:39.345112  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:13:39.371934  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:13:39.470344  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1025 10:13:39.483516  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1025 10:13:39.501845  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1025 10:13:39.507200  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1025 10:13:39.527252  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1025 10:13:39.532933  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1025 10:13:39.549399  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1025 10:13:39.554586  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1025 10:13:39.570659  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1025 10:13:39.574962  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1025 10:13:39.584673  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1025 10:13:39.589172  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1025 10:13:39.598913  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:13:39.620680  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:13:39.644461  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:13:39.668589  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:13:39.692311  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:13:39.712807  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:13:39.739124  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:13:39.767676  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:13:39.790850  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:13:39.811105  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:13:39.833707  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:13:39.856043  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1025 10:13:39.869628  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1025 10:13:39.883404  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1025 10:13:39.897013  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1025 10:13:39.919485  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1025 10:13:39.945523  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1025 10:13:39.967210  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1025 10:13:39.994983  308083 ssh_runner.go:195] Run: openssl version
	I1025 10:13:40.002778  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:13:40.017144  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:13:40.022850  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:13:40.022982  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:13:40.073080  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:13:40.081683  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:13:40.090847  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:13:40.096142  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:13:40.096266  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:13:40.138985  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:13:40.147554  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:13:40.156382  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:40.161029  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:40.161195  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:13:40.202792  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
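Each CA copied to /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash; the 51391683.0 and b5213941.0 names above are the output of openssl x509 -hash -noout, and that hash-named link is how OpenSSL-based clients locate a trusted CA at verification time. A sketch of the same hash-and-link step, shelling out to openssl as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert reproduces the hash-and-symlink step from the log:
// compute the OpenSSL subject hash of the cert, then link it as
// <hash>.0 under /etc/ssl/certs (requires root to write there).
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link first
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}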
	I1025 10:13:40.211314  308083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:13:40.215961  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:13:40.258002  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:13:40.301047  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:13:40.349624  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:13:40.395242  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:13:40.444494  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
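The -checkend 86400 probes confirm that every control-plane certificate stays valid for at least another 24 hours before being reused; a failing check would force regeneration. The equivalent test with Go's standard library (cert path taken from the log; a sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, the same question `openssl x509 -checkend 86400` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}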
	I1025 10:13:40.496874  308083 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1025 10:13:40.496975  308083 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-480889-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
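The kubelet drop-in above is rendered per node: ExecStart= is first emptied so the override replaces rather than appends to the base unit, and --hostname-override/--node-ip carry the joining node's identity (ha-480889-m02, 192.168.49.3). A sketch of how such a unit could be templated; the template text is illustrative, not minikube's actual one:

package main

import (
	"os"
	"text/template"
)

// Illustrative per-node kubelet drop-in; the bare ExecStart= clears
// any inherited command before the node-specific one is set.
const unit = `[Service]
ExecStart=
ExecStart={{.Bin}} --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, map[string]string{
		"Bin":  "/var/lib/minikube/binaries/v1.34.1/kubelet",
		"Node": "ha-480889-m02",
		"IP":   "192.168.49.3",
	})
}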
	I1025 10:13:40.497007  308083 kube-vip.go:115] generating kube-vip config ...
	I1025 10:13:40.497062  308083 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1025 10:13:40.539654  308083 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
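kube-vip's IPVS-based control-plane load balancing is optional: the lsmod | grep ip_vs probe exits 1 here, so minikube degrades gracefully to plain ARP-based VIP failover instead of aborting. The same presence check can be made directly against /proc/modules (a sketch, not minikube's code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasIPVS scans /proc/modules for the ip_vs module family, the same
// signal the `lsmod | grep ip_vs` probe in the log relies on.
func hasIPVS() bool {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "ip_vs") {
			return true
		}
	}
	return false
}

func main() { fmt.Println("ip_vs loaded:", hasIPVS()) }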
	I1025 10:13:40.539717  308083 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1025 10:13:40.539780  308083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:13:40.558469  308083 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:13:40.558603  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1025 10:13:40.566867  308083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 10:13:40.583436  308083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:13:40.596901  308083 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1025 10:13:40.612066  308083 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1025 10:13:40.616047  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:13:40.627164  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:40.770079  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:40.784212  308083 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:13:40.784687  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:13:40.790656  308083 out.go:179] * Verifying Kubernetes components...
	I1025 10:13:40.793379  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:13:40.919442  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:13:40.934315  308083 kapi.go:59] client config for ha-480889: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1025 10:13:40.934388  308083 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
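The override on the last line matters during restarts: the profile kubeconfig still points at the HA VIP 192.168.49.254, but while kube-vip is coming back up minikube swaps in the primary node's direct endpoint 192.168.49.2:8443. A client-go sketch of that host swap (the kubeconfig path is an assumption):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// A sketch of the "stale host" override from the log: load a
// kubeconfig, then point the client at one control plane directly
// instead of the HA VIP before polling it.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.Host = "https://192.168.49.2:8443" // bypass the 192.168.49.254 VIP
	fmt.Println("using host:", cfg.Host)
}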
	I1025 10:13:40.936607  308083 node_ready.go:35] waiting up to 6m0s for node "ha-480889-m02" to be "Ready" ...
	I1025 10:14:03.978798  308083 node_ready.go:49] node "ha-480889-m02" is "Ready"
	I1025 10:14:03.978827  308083 node_ready.go:38] duration metric: took 23.042187504s for node "ha-480889-m02" to be "Ready" ...
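The 23-second gap between the two lines above is the Ready poll: minikube re-reads the Node object until its Ready condition reports True. A compact client-go version of that wait (cluster and node names taken from the log; the bare loop stands in for the real 6m0s timeout):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady mirrors the node_ready wait in the log: fetch the node
// and report whether its Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		if ok, err := nodeReady(cs, "ha-480889-m02"); err == nil && ok {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}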
	I1025 10:14:03.978841  308083 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:14:03.978901  308083 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:14:04.002008  308083 api_server.go:72] duration metric: took 23.217688145s to wait for apiserver process to appear ...
	I1025 10:14:04.002035  308083 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:14:04.002057  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:04.065805  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:14:04.065839  308083 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:14:04.502158  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:04.511711  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:14:04.511802  308083 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:14:05.002194  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:05.013361  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:14:05.013506  308083 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:14:05.503134  308083 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 10:14:05.514732  308083 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 10:14:05.518544  308083 api_server.go:141] control plane version: v1.34.1
	I1025 10:14:05.518622  308083 api_server.go:131] duration metric: took 1.516578961s to wait for apiserver health ...
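The 500 responses above are the normal apiserver warm-up sequence: /healthz enumerates every poststarthook, and entries such as rbac/bootstrap-roles flip to ok last, so minikube simply re-polls roughly every 500ms until the body is a bare ok. A minimal version of that loop (InsecureSkipVerify stands in for the client-cert TLS config the real code builds from the profile):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns
// 200, mirroring the retry loop visible in the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.49.2:8443/healthz", time.Minute))
}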
	I1025 10:14:05.518646  308083 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:14:05.535848  308083 system_pods.go:59] 26 kube-system pods found
	I1025 10:14:05.535941  308083 system_pods.go:61] "coredns-66bc5c9577-ctnsn" [4c76c01c-15ed-4930-ac1a-1e2bf7de3961] Running
	I1025 10:14:05.535963  308083 system_pods.go:61] "coredns-66bc5c9577-h4lrc" [ade89685-c5d2-4e4e-847d-7af6cb3fb862] Running
	I1025 10:14:05.535986  308083 system_pods.go:61] "etcd-ha-480889" [e343e174-731b-4eb7-97df-0220f254bfcf] Running
	I1025 10:14:05.536032  308083 system_pods.go:61] "etcd-ha-480889-m02" [52f56789-d8bf-4251-9316-a0b572f65125] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:14:05.536059  308083 system_pods.go:61] "etcd-ha-480889-m03" [7fb90646-4b60-4cc2-a527-c7e563bb182b] Running
	I1025 10:14:05.536100  308083 system_pods.go:61] "kindnet-227ts" [c2c62be9-5d6e-4a43-9eff-9a7e220282d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:14:05.536125  308083 system_pods.go:61] "kindnet-2fqxj" [da4ef885-af3d-4ee3-9151-cdca0253c911] Running
	I1025 10:14:05.536154  308083 system_pods.go:61] "kindnet-8fgmd" [13833b7e-6794-4f30-8bec-20375bd481f2] Running
	I1025 10:14:05.536192  308083 system_pods.go:61] "kindnet-92p8z" [c1f4d260-381c-42d8-a8a5-77ae60cf42c6] Running
	I1025 10:14:05.536214  308083 system_pods.go:61] "kube-apiserver-ha-480889" [3f293b6b-7247-48a0-aa80-508696bea727] Running
	I1025 10:14:05.536251  308083 system_pods.go:61] "kube-apiserver-ha-480889-m02" [faae5baa-e581-4254-b659-0687cfebfb67] Running
	I1025 10:14:05.536276  308083 system_pods.go:61] "kube-apiserver-ha-480889-m03" [f18f8a4d-22bd-48e4-9b23-e5383f2fce25] Running
	I1025 10:14:05.536299  308083 system_pods.go:61] "kube-controller-manager-ha-480889" [6c111362-d576-4cb0-b102-086f180ff7b7] Running
	I1025 10:14:05.536340  308083 system_pods.go:61] "kube-controller-manager-ha-480889-m02" [443192d3-d7a3-40c4-99bf-2a1eac354f88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:14:05.536367  308083 system_pods.go:61] "kube-controller-manager-ha-480889-m03" [c5d29ad2-f161-4c39-9de4-35916c43e02b] Running
	I1025 10:14:05.536392  308083 system_pods.go:61] "kube-proxy-29hlq" [2c0b691f-c26f-49bd-9b8b-39819ca8539d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:14:05.536425  308083 system_pods.go:61] "kube-proxy-4d5ks" [058d38d9-4dec-40ff-ac68-9651d27ba0c6] Running
	I1025 10:14:05.536449  308083 system_pods.go:61] "kube-proxy-6x5rb" [e73b3f75-02d7-46e3-940c-ffd727e4c87d] Running
	I1025 10:14:05.536471  308083 system_pods.go:61] "kube-proxy-9rtcs" [6fd17399-e636-4de6-aa9c-e0e3d3656c41] Running
	I1025 10:14:05.536506  308083 system_pods.go:61] "kube-scheduler-ha-480889" [9036810d-dce1-4542-ac53-b5d70020809c] Running
	I1025 10:14:05.536532  308083 system_pods.go:61] "kube-scheduler-ha-480889-m02" [f4c7c190-55e0-4bbf-9c22-fe9b3d8fc98d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:14:05.536556  308083 system_pods.go:61] "kube-scheduler-ha-480889-m03" [fdcb0331-d8b0-4fb0-9549-459e365b5863] Running
	I1025 10:14:05.536591  308083 system_pods.go:61] "kube-vip-ha-480889" [07959933-b7f0-46ad-9fa2-d9c661db7882] Running
	I1025 10:14:05.536614  308083 system_pods.go:61] "kube-vip-ha-480889-m02" [fea939ce-de9c-446b-b961-37a72c945913] Running
	I1025 10:14:05.536639  308083 system_pods.go:61] "kube-vip-ha-480889-m03" [f2a5dbed-19e6-4092-8340-c798578dfd40] Running
	I1025 10:14:05.536679  308083 system_pods.go:61] "storage-provisioner" [15113825-bb63-434f-bd5e-2ffd789452d6] Running
	I1025 10:14:05.536705  308083 system_pods.go:74] duration metric: took 18.038599ms to wait for pod list to return data ...
	I1025 10:14:05.536727  308083 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:14:05.551153  308083 default_sa.go:45] found service account: "default"
	I1025 10:14:05.551231  308083 default_sa.go:55] duration metric: took 14.469512ms for default service account to be created ...
	I1025 10:14:05.551256  308083 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:14:05.562144  308083 system_pods.go:86] 26 kube-system pods found
	I1025 10:14:05.562232  308083 system_pods.go:89] "coredns-66bc5c9577-ctnsn" [4c76c01c-15ed-4930-ac1a-1e2bf7de3961] Running
	I1025 10:14:05.562257  308083 system_pods.go:89] "coredns-66bc5c9577-h4lrc" [ade89685-c5d2-4e4e-847d-7af6cb3fb862] Running
	I1025 10:14:05.562298  308083 system_pods.go:89] "etcd-ha-480889" [e343e174-731b-4eb7-97df-0220f254bfcf] Running
	I1025 10:14:05.562329  308083 system_pods.go:89] "etcd-ha-480889-m02" [52f56789-d8bf-4251-9316-a0b572f65125] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:14:05.562357  308083 system_pods.go:89] "etcd-ha-480889-m03" [7fb90646-4b60-4cc2-a527-c7e563bb182b] Running
	I1025 10:14:05.562400  308083 system_pods.go:89] "kindnet-227ts" [c2c62be9-5d6e-4a43-9eff-9a7e220282d7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:14:05.562424  308083 system_pods.go:89] "kindnet-2fqxj" [da4ef885-af3d-4ee3-9151-cdca0253c911] Running
	I1025 10:14:05.562452  308083 system_pods.go:89] "kindnet-8fgmd" [13833b7e-6794-4f30-8bec-20375bd481f2] Running
	I1025 10:14:05.562486  308083 system_pods.go:89] "kindnet-92p8z" [c1f4d260-381c-42d8-a8a5-77ae60cf42c6] Running
	I1025 10:14:05.562513  308083 system_pods.go:89] "kube-apiserver-ha-480889" [3f293b6b-7247-48a0-aa80-508696bea727] Running
	I1025 10:14:05.562563  308083 system_pods.go:89] "kube-apiserver-ha-480889-m02" [faae5baa-e581-4254-b659-0687cfebfb67] Running
	I1025 10:14:05.562590  308083 system_pods.go:89] "kube-apiserver-ha-480889-m03" [f18f8a4d-22bd-48e4-9b23-e5383f2fce25] Running
	I1025 10:14:05.562616  308083 system_pods.go:89] "kube-controller-manager-ha-480889" [6c111362-d576-4cb0-b102-086f180ff7b7] Running
	I1025 10:14:05.562658  308083 system_pods.go:89] "kube-controller-manager-ha-480889-m02" [443192d3-d7a3-40c4-99bf-2a1eac354f88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:14:05.562685  308083 system_pods.go:89] "kube-controller-manager-ha-480889-m03" [c5d29ad2-f161-4c39-9de4-35916c43e02b] Running
	I1025 10:14:05.562729  308083 system_pods.go:89] "kube-proxy-29hlq" [2c0b691f-c26f-49bd-9b8b-39819ca8539d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:14:05.562755  308083 system_pods.go:89] "kube-proxy-4d5ks" [058d38d9-4dec-40ff-ac68-9651d27ba0c6] Running
	I1025 10:14:05.562843  308083 system_pods.go:89] "kube-proxy-6x5rb" [e73b3f75-02d7-46e3-940c-ffd727e4c87d] Running
	I1025 10:14:05.562883  308083 system_pods.go:89] "kube-proxy-9rtcs" [6fd17399-e636-4de6-aa9c-e0e3d3656c41] Running
	I1025 10:14:05.562903  308083 system_pods.go:89] "kube-scheduler-ha-480889" [9036810d-dce1-4542-ac53-b5d70020809c] Running
	I1025 10:14:05.562928  308083 system_pods.go:89] "kube-scheduler-ha-480889-m02" [f4c7c190-55e0-4bbf-9c22-fe9b3d8fc98d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:14:05.562965  308083 system_pods.go:89] "kube-scheduler-ha-480889-m03" [fdcb0331-d8b0-4fb0-9549-459e365b5863] Running
	I1025 10:14:05.562991  308083 system_pods.go:89] "kube-vip-ha-480889" [07959933-b7f0-46ad-9fa2-d9c661db7882] Running
	I1025 10:14:05.563016  308083 system_pods.go:89] "kube-vip-ha-480889-m02" [fea939ce-de9c-446b-b961-37a72c945913] Running
	I1025 10:14:05.563070  308083 system_pods.go:89] "kube-vip-ha-480889-m03" [f2a5dbed-19e6-4092-8340-c798578dfd40] Running
	I1025 10:14:05.563096  308083 system_pods.go:89] "storage-provisioner" [15113825-bb63-434f-bd5e-2ffd789452d6] Running
	I1025 10:14:05.563122  308083 system_pods.go:126] duration metric: took 11.844458ms to wait for k8s-apps to be running ...
	I1025 10:14:05.563161  308083 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:14:05.563251  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:14:05.583878  308083 system_svc.go:56] duration metric: took 20.700093ms WaitForService to wait for kubelet
	I1025 10:14:05.583959  308083 kubeadm.go:586] duration metric: took 24.799662385s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:14:05.584013  308083 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:14:05.602014  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602101  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602129  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602149  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602183  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602208  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602232  308083 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:14:05.602268  308083 node_conditions.go:123] node cpu capacity is 2
	I1025 10:14:05.602294  308083 node_conditions.go:105] duration metric: took 18.245402ms to run NodePressure ...
	I1025 10:14:05.602322  308083 start.go:241] waiting for startup goroutines ...
	I1025 10:14:05.602372  308083 start.go:255] writing updated cluster config ...
	I1025 10:14:05.606107  308083 out.go:203] 
	I1025 10:14:05.609375  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:14:05.609570  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:14:05.612923  308083 out.go:179] * Starting "ha-480889-m03" control-plane node in "ha-480889" cluster
	I1025 10:14:05.616650  308083 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:14:05.619578  308083 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:14:05.622647  308083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:14:05.622723  308083 cache.go:58] Caching tarball of preloaded images
	I1025 10:14:05.622730  308083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:14:05.622888  308083 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:14:05.622906  308083 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:14:05.623058  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:14:05.644689  308083 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:14:05.644714  308083 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:14:05.644728  308083 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:14:05.644760  308083 start.go:360] acquireMachinesLock for ha-480889-m03: {Name:mkdc7aead07cc61c4483ca641c0f901f32cc9e0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:14:05.644832  308083 start.go:364] duration metric: took 40.6µs to acquireMachinesLock for "ha-480889-m03"
	I1025 10:14:05.644859  308083 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:14:05.644869  308083 fix.go:54] fixHost starting: m03
	I1025 10:14:05.645136  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m03 --format={{.State.Status}}
	I1025 10:14:05.665455  308083 fix.go:112] recreateIfNeeded on ha-480889-m03: state=Stopped err=<nil>
	W1025 10:14:05.665482  308083 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:14:05.668964  308083 out.go:252] * Restarting existing docker container for "ha-480889-m03" ...
	I1025 10:14:05.669067  308083 cli_runner.go:164] Run: docker start ha-480889-m03
	I1025 10:14:06.010869  308083 cli_runner.go:164] Run: docker container inspect ha-480889-m03 --format={{.State.Status}}
	I1025 10:14:06.033631  308083 kic.go:430] container "ha-480889-m03" state is running.
	I1025 10:14:06.034025  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m03
	I1025 10:14:06.062398  308083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/config.json ...
	I1025 10:14:06.062842  308083 machine.go:93] provisionDockerMachine start ...
	I1025 10:14:06.062924  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:06.096711  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:06.097013  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:06.097022  308083 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:14:06.100286  308083 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44394->127.0.0.1:33183: read: connection reset by peer
	I1025 10:14:09.422447  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m03
	
	I1025 10:14:09.422528  308083 ubuntu.go:182] provisioning hostname "ha-480889-m03"
	I1025 10:14:09.422611  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:09.454682  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:09.454994  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:09.455005  308083 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-480889-m03 && echo "ha-480889-m03" | sudo tee /etc/hostname
	I1025 10:14:09.716055  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-480889-m03
	
	I1025 10:14:09.716202  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:09.758198  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:09.758502  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:09.758518  308083 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-480889-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-480889-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-480889-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:14:09.952740  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:14:09.952771  308083 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:14:09.952843  308083 ubuntu.go:190] setting up certificates
	I1025 10:14:09.952854  308083 provision.go:84] configureAuth start
	I1025 10:14:09.952966  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m03
	I1025 10:14:10.002091  308083 provision.go:143] copyHostCerts
	I1025 10:14:10.002146  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:14:10.002194  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:14:10.002207  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:14:10.002336  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:14:10.002445  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:14:10.002473  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:14:10.002482  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:14:10.002512  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:14:10.002620  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:14:10.002645  308083 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:14:10.002656  308083 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:14:10.002686  308083 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:14:10.002748  308083 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.ha-480889-m03 san=[127.0.0.1 192.168.49.4 ha-480889-m03 localhost minikube]
	I1025 10:14:10.250973  308083 provision.go:177] copyRemoteCerts
	I1025 10:14:10.251332  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:14:10.251408  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:10.289237  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:10.436731  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 10:14:10.436797  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 10:14:10.544747  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 10:14:10.544817  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:14:10.630377  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 10:14:10.630464  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:14:10.673862  308083 provision.go:87] duration metric: took 720.988399ms to configureAuth
	I1025 10:14:10.673890  308083 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:14:10.674168  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:14:10.674521  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:10.707641  308083 main.go:141] libmachine: Using SSH client type: native
	I1025 10:14:10.707938  308083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I1025 10:14:10.707957  308083 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:14:11.154845  308083 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:14:11.154927  308083 machine.go:96] duration metric: took 5.092069874s to provisionDockerMachine
	I1025 10:14:11.154954  308083 start.go:293] postStartSetup for "ha-480889-m03" (driver="docker")
	I1025 10:14:11.154994  308083 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:14:11.155090  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:14:11.155169  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.175592  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.283365  308083 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:14:11.286806  308083 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:14:11.286877  308083 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:14:11.286905  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:14:11.286994  308083 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:14:11.287123  308083 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:14:11.287171  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /etc/ssl/certs/2612562.pem
	I1025 10:14:11.287295  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:14:11.295059  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:14:11.316093  308083 start.go:296] duration metric: took 161.095107ms for postStartSetup
	I1025 10:14:11.316217  308083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:14:11.316276  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.333862  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.435204  308083 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:14:11.440180  308083 fix.go:56] duration metric: took 5.79530454s for fixHost
	I1025 10:14:11.440241  308083 start.go:83] releasing machines lock for "ha-480889-m03", held for 5.795361279s
	I1025 10:14:11.440311  308083 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m03
	I1025 10:14:11.464304  308083 out.go:179] * Found network options:
	I1025 10:14:11.467314  308083 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1025 10:14:11.470389  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:14:11.470430  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:14:11.470457  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	W1025 10:14:11.470474  308083 proxy.go:120] fail to check proxy env: Error ip not in block
	I1025 10:14:11.470546  308083 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:14:11.470610  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.470888  308083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:14:11.470954  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:14:11.492648  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.500283  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:14:11.796571  308083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:14:11.919974  308083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:14:11.920047  308083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:14:11.930959  308083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:14:11.931034  308083 start.go:495] detecting cgroup driver to use...
	I1025 10:14:11.931084  308083 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:14:11.931150  308083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:14:11.976106  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:14:12.014574  308083 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:14:12.014688  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:14:12.063668  308083 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:14:12.091979  308083 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:14:12.314959  308083 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:14:12.575887  308083 docker.go:234] disabling docker service ...
	I1025 10:14:12.575989  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:14:12.601545  308083 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:14:12.619323  308083 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:14:12.867377  308083 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:14:13.108726  308083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:14:13.127994  308083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:14:13.145943  308083 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:14:13.146033  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.156671  308083 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:14:13.156750  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.168655  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.184089  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.194894  308083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:14:13.204315  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.214077  308083 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:14:13.224397  308083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
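Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a sketch assuming the stock section layout of the drop-in; only the keys touched here are shown):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]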
	I1025 10:14:13.234566  308083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:14:13.243678  308083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:14:13.253013  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:14:13.493138  308083 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:15:43.813681  308083 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320502184s)
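A restart taking almost exactly 1m30s is consistent with systemd waiting out the unit's default stop timeout (90s) before killing the old crio process and starting the new one. Two hypothetical checks, not part of this run, that would confirm it:

	$ systemctl show crio -p TimeoutStopUSec   # typically "TimeoutStopUSec=1min 30s"
	$ journalctl -u crio -n 50 --no-pager      # look for a stop timeout before the SIGKILL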
	I1025 10:15:43.813712  308083 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:15:43.813771  308083 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:15:43.818284  308083 start.go:563] Will wait 60s for crictl version
	I1025 10:15:43.818348  308083 ssh_runner.go:195] Run: which crictl
	I1025 10:15:43.822612  308083 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:15:43.849591  308083 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:15:43.849679  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:15:43.881155  308083 ssh_runner.go:195] Run: crio --version
	I1025 10:15:43.916090  308083 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:15:43.919321  308083 out.go:179]   - env NO_PROXY=192.168.49.2
	I1025 10:15:43.922326  308083 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1025 10:15:43.925259  308083 cli_runner.go:164] Run: docker network inspect ha-480889 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:15:43.954223  308083 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 10:15:43.958732  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
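The hosts rewrite above filters any stale host.minikube.internal entry into a temp file and copies it back with sudo cp rather than mv: inside a Docker-driver node /etc/hosts is a bind mount, which can be overwritten in place but not replaced by rename. The net effect is one refreshed line:

	192.168.49.1	host.minikube.internal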
	I1025 10:15:43.969465  308083 mustload.go:65] Loading cluster: ha-480889
	I1025 10:15:43.969714  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:15:43.969954  308083 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:15:43.987361  308083 host.go:66] Checking if "ha-480889" exists ...
	I1025 10:15:43.987646  308083 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889 for IP: 192.168.49.4
	I1025 10:15:43.987660  308083 certs.go:195] generating shared ca certs ...
	I1025 10:15:43.987675  308083 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:15:43.987792  308083 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:15:43.987838  308083 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:15:43.987850  308083 certs.go:257] generating profile certs ...
	I1025 10:15:43.987924  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key
	I1025 10:15:43.987987  308083 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key.7d4a26e1
	I1025 10:15:43.988022  308083 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key
	I1025 10:15:43.988030  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 10:15:43.988044  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 10:15:43.988056  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 10:15:43.988066  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 10:15:43.988076  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 10:15:43.988088  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 10:15:43.988099  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 10:15:43.988111  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 10:15:43.988160  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:15:43.988188  308083 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:15:43.988197  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:15:43.988222  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:15:43.988244  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:15:43.988266  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:15:43.988306  308083 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:15:43.988330  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> /usr/share/ca-certificates/2612562.pem
	I1025 10:15:43.988342  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:43.988353  308083 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem -> /usr/share/ca-certificates/261256.pem
	I1025 10:15:43.988408  308083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:15:44.012522  308083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:15:44.114325  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1025 10:15:44.118993  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1025 10:15:44.127630  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1025 10:15:44.131303  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1025 10:15:44.140046  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1025 10:15:44.144492  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1025 10:15:44.154086  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1025 10:15:44.158181  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1025 10:15:44.167518  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1025 10:15:44.171723  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1025 10:15:44.181427  308083 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1025 10:15:44.185332  308083 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1025 10:15:44.194266  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:15:44.214098  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:15:44.234054  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:15:44.256195  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:15:44.279031  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:15:44.299344  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:15:44.323793  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:15:44.345417  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:15:44.365719  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:15:44.388245  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:15:44.408144  308083 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:15:44.428098  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1025 10:15:44.441938  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1025 10:15:44.457102  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1025 10:15:44.471357  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1025 10:15:44.485615  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1025 10:15:44.498465  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1025 10:15:44.511910  308083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1025 10:15:44.531258  308083 ssh_runner.go:195] Run: openssl version
	I1025 10:15:44.540606  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:15:44.550354  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:15:44.554246  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:15:44.554361  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:15:44.602272  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:15:44.611902  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:15:44.622835  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:44.629226  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:44.629299  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:15:44.670883  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:15:44.679524  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:15:44.689802  308083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:15:44.693893  308083 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:15:44.694068  308083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:15:44.735651  308083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
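The <hash>.0 link names in the test -L checks above come from OpenSSL's subject-hash scheme: certificate verification looks CAs up under /etc/ssl/certs by subject hash, and the hash is exactly what the openssl x509 -hash calls print. For the minikube CA:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0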
	I1025 10:15:44.743736  308083 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:15:44.747896  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:15:44.790110  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:15:44.832406  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:15:44.874662  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:15:44.915849  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:15:44.959092  308083 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 10:15:45.002430  308083 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1025 10:15:45.002579  308083 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-480889-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-480889 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
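The empty ExecStart= line in the unit text above is the standard systemd override idiom: it clears the command list inherited from the base unit so the following ExecStart= fully replaces it. A generic sketch of the pattern (placeholder command, not from this run):

	[Service]
	# An empty ExecStart= clears the inherited command list;
	# the next ExecStart= then defines the replacement.
	ExecStart=
	ExecStart=/usr/local/bin/replacement-command --flag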
	I1025 10:15:45.002609  308083 kube-vip.go:115] generating kube-vip config ...
	I1025 10:15:45.002683  308083 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1025 10:15:45.029854  308083 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
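kube-vip only enables its control-plane load balancer when the ip_vs kernel modules are loaded; the grep exiting with status 1 above means they are not. A hypothetical check and remedy on a host whose kernel ships the module:

	$ lsmod | grep ip_vs     # exit status 1: module not loaded
	$ sudo modprobe ip_vs    # load it, then re-run the check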
	I1025 10:15:45.029925  308083 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
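Even with IPVS load balancing disabled, the manifest above still runs kube-vip for the VIP itself: leader election on the plndr-cp-lock lease picks one control-plane node to announce 192.168.49.254 on eth0 via ARP. A quick hypothetical check for which node currently holds the address:

	$ ip addr show eth0 | grep 192.168.49.254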
	I1025 10:15:45.030057  308083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:15:45.063539  308083 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:15:45.063684  308083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1025 10:15:45.095087  308083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 10:15:45.131847  308083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:15:45.152140  308083 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1025 10:15:45.177067  308083 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1025 10:15:45.183642  308083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:15:45.224794  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:15:45.476283  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:15:45.492420  308083 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:15:45.492955  308083 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:15:45.496345  308083 out.go:179] * Verifying Kubernetes components...
	I1025 10:15:45.499247  308083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:15:45.679197  308083 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:15:45.698347  308083 kapi.go:59] client config for ha-480889: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/ha-480889/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1025 10:15:45.698425  308083 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1025 10:15:45.698682  308083 node_ready.go:35] waiting up to 6m0s for node "ha-480889-m03" to be "Ready" ...
	W1025 10:15:47.704097  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	W1025 10:15:50.202392  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	[... 154 further identical "Ready":"Unknown" retry lines, 10:15:52 through 10:21:42, elided ...]
	W1025 10:21:44.702141  308083 node_ready.go:57] node "ha-480889-m03" has "Ready":"Unknown" status (will retry)
	I1025 10:21:45.699723  308083 node_ready.go:38] duration metric: took 6m0.00101372s for node "ha-480889-m03" to be "Ready" ...
	I1025 10:21:45.702936  308083 out.go:203] 
	W1025 10:21:45.705812  308083 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1025 10:21:45.705837  308083 out.go:285] * 
	W1025 10:21:45.708064  308083 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:21:45.711065  308083 out.go:203] 
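The failure itself is the readiness wait: ha-480889-m03 reported Ready=Unknown for the entire 6m0s budget, so the restart exits with GUEST_START. Inspecting the stuck condition by hand would look roughly like this (hypothetical, assuming kubectl is pointed at the cluster):

	$ kubectl get node ha-480889-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
	$ kubectl describe node ha-480889-m03   # the Conditions section carries the reason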
	
	
	==> CRI-O <==
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.872198639Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=aa627551-b4d5-499e-bdd6-7970bf78bb8e name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.8732321Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=3a445103-3339-479a-b20e-1c8b913f81b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.873332589Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.879036798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.879394684Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b711fedb8e2618cc0f4b880fad10f4bf8b29d19e8ac5c5fbc1ffc64bd2f05ae5/merged/etc/passwd: no such file or directory"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.879492153Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b711fedb8e2618cc0f4b880fad10f4bf8b29d19e8ac5c5fbc1ffc64bd2f05ae5/merged/etc/group: no such file or directory"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.879805058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.90911303Z" level=info msg="Created container 259b995f91b9c68705817e45cb74e856232ab4b1d45cae1a557d2406942ace53: kube-system/storage-provisioner/storage-provisioner" id=3a445103-3339-479a-b20e-1c8b913f81b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.910291953Z" level=info msg="Starting container: 259b995f91b9c68705817e45cb74e856232ab4b1d45cae1a557d2406942ace53" id=376239e4-87b0-44a9-9df0-3c3e5353824a name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:14:37 ha-480889 crio[665]: time="2025-10-25T10:14:37.917558356Z" level=info msg="Started container" PID=1402 containerID=259b995f91b9c68705817e45cb74e856232ab4b1d45cae1a557d2406942ace53 description=kube-system/storage-provisioner/storage-provisioner id=376239e4-87b0-44a9-9df0-3c3e5353824a name=/runtime.v1.RuntimeService/StartContainer sandboxID=088d0d7b8bf0c2f621c0ae22566dca0cf1d81367602172bfbbd843248aea9931
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.555322496Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.559161147Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.559205784Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.559228766Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.563121449Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.563158898Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.563183087Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.573678291Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.573872779Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.574376406Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.578374066Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.578414764Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.578445763Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.582936063Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:14:47 ha-480889 crio[665]: time="2025-10-25T10:14:47.582996642Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	259b995f91b9c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   088d0d7b8bf0c       storage-provisioner                 kube-system
	3d17e8c3e629c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   3                   8cbe108c8dc1a       kube-controller-manager-ha-480889   kube-system
	8a6f8ac4178b1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   a4778b8bb50e2       coredns-66bc5c9577-h4lrc            kube-system
	2c07e2732f356       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   221f4b21ed8c2       kube-proxy-6x5rb                    kube-system
	8b6196b876372       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   8039291c91840       busybox-7b57f96db7-wkwwg            default
	15568eac2b869       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   badf118cbd9c1       coredns-66bc5c9577-ctnsn            kube-system
	9e45eacfcf479       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   088d0d7b8bf0c       storage-provisioner                 kube-system
	fbcc0424a1c5f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   c3c6117dfa2fc       kindnet-8fgmd                       kube-system
	3d23dbb42715f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   2                   8cbe108c8dc1a       kube-controller-manager-ha-480889   kube-system
	07e7673199f69       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   f30b7eb202966       etcd-ha-480889                      kube-system
	0b2d139004d5a       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   dfd777c7213ec       kube-vip-ha-480889                  kube-system
	322c2cc726dbd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   b85982ed8ef84       kube-scheduler-ha-480889            kube-system
	170a3a9364b50       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   f6ee90b1515bb       kube-apiserver-ha-480889            kube-system
	
	
	==> coredns [15568eac2b869838ebb71f6d12525ec66bc41f9aa490cf1a68c490999f19b9d6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56266 - 39256 "HINFO IN 6126263590743240156.8598032974753550859. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030490651s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8a6f8ac4178b104f0091791bd890925441e209f21434df4df270395089143c26] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56215 - 36101 "HINFO IN 38725101095574367.261866642865519352. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.011735961s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
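Both CoreDNS replicas time out dialing 10.96.0.1:443, the kubernetes ClusterIP that fronts the API server; that virtual IP only answers once kube-proxy has programmed the service rules on the node. Hypothetical checks from inside the node (assuming kube-proxy in iptables mode):

	$ kubectl -n default get svc kubernetes   # ClusterIP should be 10.96.0.1
	$ sudo iptables-save | grep 10.96.0.1     # kube-proxy's DNAT rules for the VIP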
	
	
	==> describe nodes <==
	Name:               ha-480889
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-480889
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=ha-480889
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_07_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:07:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-480889
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:21:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:21:12 +0000   Sat, 25 Oct 2025 10:07:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:21:12 +0000   Sat, 25 Oct 2025 10:07:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:21:12 +0000   Sat, 25 Oct 2025 10:07:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:21:12 +0000   Sat, 25 Oct 2025 10:14:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-480889
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                8216dfdd-af7a-457f-ad51-df588b2f2c14
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wkwwg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-ctnsn             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-h4lrc             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-480889                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-8fgmd                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-480889             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-480889    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-6x5rb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-480889             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-480889                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 7m50s                  kube-proxy       
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-480889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-480889 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-480889 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-480889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-480889 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-480889 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-480889 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	  Normal   NodeHasSufficientMemory  8m30s (x8 over 8m30s)  kubelet          Node ha-480889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m30s (x8 over 8m30s)  kubelet          Node ha-480889 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m30s (x8 over 8m30s)  kubelet          Node ha-480889 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m47s                  node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	  Normal   RegisteredNode           7m35s                  node-controller  Node ha-480889 event: Registered Node ha-480889 in Controller
	
	
	Name:               ha-480889-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-480889-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=ha-480889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_25T10_08_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:08:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-480889-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:21:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:21:49 +0000   Sat, 25 Oct 2025 10:08:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:21:49 +0000   Sat, 25 Oct 2025 10:08:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:21:49 +0000   Sat, 25 Oct 2025 10:08:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:21:49 +0000   Sat, 25 Oct 2025 10:09:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-480889-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ff971242-1f4f-45cd-b767-f92823ae34e7
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-cmlf6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-480889-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-227ts                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-480889-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-480889-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-29hlq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-480889-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-480889-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m42s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   CIDRAssignmentFailed     13m                    cidrAllocator    Node ha-480889-m02 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           13m                    node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	  Normal   NodeHasSufficientPID     9m32s (x8 over 9m32s)  kubelet          Node ha-480889-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m32s (x8 over 9m32s)  kubelet          Node ha-480889-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m32s (x8 over 9m32s)  kubelet          Node ha-480889-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 8m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m26s (x8 over 8m26s)  kubelet          Node ha-480889-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m26s (x8 over 8m26s)  kubelet          Node ha-480889-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m26s (x8 over 8m26s)  kubelet          Node ha-480889-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m48s                  node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	  Normal   RegisteredNode           7m36s                  node-controller  Node ha-480889-m02 event: Registered Node ha-480889-m02 in Controller
	
	
	Name:               ha-480889-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-480889-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=ha-480889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_25T10_11_08_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:11:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-480889-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:12:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 25 Oct 2025 10:11:49 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 25 Oct 2025 10:11:49 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 25 Oct 2025 10:11:49 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 25 Oct 2025 10:11:49 +0000   Sat, 25 Oct 2025 10:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-480889-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                cf43d700-f979-45ff-9dc8-5f80581e56db
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2fqxj       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-9rtcs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-480889-m04 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-480889-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-480889-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-480889-m04 status is now: NodeReady
	  Normal   RegisteredNode           7m48s              node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   RegisteredNode           7m36s              node-controller  Node ha-480889-m04 event: Registered Node ha-480889-m04 in Controller
	  Normal   NodeNotReady             6m58s              node-controller  Node ha-480889-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct25 09:34] overlayfs: idmapped layers are currently not supported
	[ +28.289706] overlayfs: idmapped layers are currently not supported
	[Oct25 09:35] overlayfs: idmapped layers are currently not supported
	[Oct25 09:36] overlayfs: idmapped layers are currently not supported
	[ +24.160248] overlayfs: idmapped layers are currently not supported
	[Oct25 09:37] overlayfs: idmapped layers are currently not supported
	[  +8.216028] overlayfs: idmapped layers are currently not supported
	[Oct25 09:38] overlayfs: idmapped layers are currently not supported
	[Oct25 09:39] overlayfs: idmapped layers are currently not supported
	[Oct25 09:41] overlayfs: idmapped layers are currently not supported
	[ +14.126672] overlayfs: idmapped layers are currently not supported
	[Oct25 09:42] overlayfs: idmapped layers are currently not supported
	[Oct25 09:43] overlayfs: idmapped layers are currently not supported
	[Oct25 09:45] kauditd_printk_skb: 8 callbacks suppressed
	[Oct25 09:47] overlayfs: idmapped layers are currently not supported
	[Oct25 09:53] overlayfs: idmapped layers are currently not supported
	[Oct25 09:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:07] overlayfs: idmapped layers are currently not supported
	[Oct25 10:08] overlayfs: idmapped layers are currently not supported
	[Oct25 10:09] overlayfs: idmapped layers are currently not supported
	[Oct25 10:11] overlayfs: idmapped layers are currently not supported
	[Oct25 10:12] overlayfs: idmapped layers are currently not supported
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[  +4.737500] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [07e7673199f69cfda9e91af2a66aad345a2ce7a92130398dd12fc4e17470e088] <==
	{"level":"warn","ts":"2025-10-25T10:21:34.910075Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"192220adada3ae40","rtt":"70.711228ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:34.910094Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"192220adada3ae40","rtt":"70.888112ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:38.588624Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:38.588686Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:39.911123Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"192220adada3ae40","rtt":"70.888112ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:39.911133Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"192220adada3ae40","rtt":"70.711228ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:42.590215Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:42.590268Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:44.911813Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"192220adada3ae40","rtt":"70.711228ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:44.911830Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"192220adada3ae40","rtt":"70.888112ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:46.592415Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:46.592493Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"192220adada3ae40","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-25T10:21:49.641017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:43688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:21:49.666847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:43690","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T10:21:49.724815Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(112157709692785404 12593026477526642892)"}
	{"level":"info","ts":"2025-10-25T10:21:49.726949Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"192220adada3ae40","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-25T10:21:49.727076Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727124Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727194Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727248Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727316Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727359Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727410Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727459Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"192220adada3ae40"}
	{"level":"info","ts":"2025-10-25T10:21:49.727532Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"192220adada3ae40"}
	
	
	==> kernel <==
	 10:21:59 up  2:04,  0 user,  load average: 1.99, 1.65, 1.68
	Linux ha-480889 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fbcc0424a1c5f8864ade5ed9949267a842ff3cf9126f862facc9e1aa5eacffff] <==
	I1025 10:21:27.558620       1 main.go:324] Node ha-480889-m03 has CIDR [10.244.3.0/24] 
	I1025 10:21:27.558936       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1025 10:21:27.559025       1 main.go:324] Node ha-480889-m04 has CIDR [10.244.4.0/24] 
	I1025 10:21:37.558162       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:21:37.558266       1 main.go:301] handling current node
	I1025 10:21:37.558288       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1025 10:21:37.558296       1 main.go:324] Node ha-480889-m02 has CIDR [10.244.1.0/24] 
	I1025 10:21:37.558464       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1025 10:21:37.558476       1 main.go:324] Node ha-480889-m03 has CIDR [10.244.3.0/24] 
	I1025 10:21:37.558535       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1025 10:21:37.558545       1 main.go:324] Node ha-480889-m04 has CIDR [10.244.4.0/24] 
	I1025 10:21:47.554097       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1025 10:21:47.554168       1 main.go:324] Node ha-480889-m03 has CIDR [10.244.3.0/24] 
	I1025 10:21:47.554480       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1025 10:21:47.554498       1 main.go:324] Node ha-480889-m04 has CIDR [10.244.4.0/24] 
	I1025 10:21:47.554611       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:21:47.554627       1 main.go:301] handling current node
	I1025 10:21:47.554640       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1025 10:21:47.554645       1 main.go:324] Node ha-480889-m02 has CIDR [10.244.1.0/24] 
	I1025 10:21:57.555400       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 10:21:57.555434       1 main.go:301] handling current node
	I1025 10:21:57.555451       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1025 10:21:57.555457       1 main.go:324] Node ha-480889-m02 has CIDR [10.244.1.0/24] 
	I1025 10:21:57.555830       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1025 10:21:57.555854       1 main.go:324] Node ha-480889-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [170a3a9364b5079051bd3c5c594733a45ac4ddd6193638cc413453308f5c0fac] <==
	I1025 10:14:03.967743       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:14:03.967829       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:14:03.973267       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:14:04.026613       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:14:04.032481       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:14:04.052238       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:14:04.052324       1 policy_source.go:240] refreshing policies
	I1025 10:14:04.058097       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:14:04.061386       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:14:04.072018       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:14:04.072031       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:14:04.084688       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	W1025 10:14:04.098372       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1025 10:14:04.099889       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:14:04.119656       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:14:04.126503       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:14:04.130630       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:14:04.130701       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:14:04.131677       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1025 10:14:04.134712       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	W1025 10:14:05.447579       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1025 10:14:06.750594       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:14:38.917463       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:14:48.841975       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:15:01.983934       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [3d17e8c3e629ce1a8cc189e9334fe0f0ede8346a9b11bb7ab70d582f3df753dd] <==
	I1025 10:14:23.281949       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:14:23.286288       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:14:23.286403       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:14:23.299410       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:14:23.300876       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:14:23.306254       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:14:23.306381       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-480889-m04"
	I1025 10:14:23.311210       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:14:23.312754       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:14:23.313149       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:14:23.313534       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-480889-m02"
	I1025 10:14:23.313587       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-480889-m03"
	I1025 10:14:23.313614       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-480889-m04"
	I1025 10:14:23.313645       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-480889"
	I1025 10:14:23.313684       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:14:23.317005       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:14:23.344025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:14:23.367963       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:14:23.368079       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:14:23.368095       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:14:38.895946       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-q2vqt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-q2vqt\": the object has been modified; please apply your changes to the latest version and try again"
	I1025 10:14:38.896149       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f9cd3e42-b9dd-4a9e-9497-cb7c76655b63", APIVersion:"v1", ResourceVersion:"304", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-q2vqt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-q2vqt": the object has been modified; please apply your changes to the latest version and try again
	I1025 10:14:48.851349       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-q2vqt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-q2vqt\": the object has been modified; please apply your changes to the latest version and try again"
	I1025 10:14:48.851413       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f9cd3e42-b9dd-4a9e-9497-cb7c76655b63", APIVersion:"v1", ResourceVersion:"304", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-q2vqt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-q2vqt": the object has been modified; please apply your changes to the latest version and try again
	I1025 10:20:12.113935       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-gzkw5"
	
	
	==> kube-controller-manager [3d23dbb42715f1bb9050f4885b532a8877bf1805090fe8f7db8038e263d7391a] <==
	I1025 10:13:51.570432       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:13:52.993227       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1025 10:13:52.993297       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:13:52.996751       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1025 10:13:52.996842       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1025 10:13:52.996860       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1025 10:13:52.996872       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 10:14:03.024456       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [2c07e2732f356b8a475ac49d8754bc57a66b40d6244caf09ba433eb3a403de55] <==
	I1025 10:14:07.748150       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:14:08.033949       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:14:08.140133       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:14:08.140226       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 10:14:08.140327       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:14:08.195750       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:14:08.196249       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:14:08.211646       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:14:08.212020       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:14:08.212082       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:14:08.213291       1 config.go:200] "Starting service config controller"
	I1025 10:14:08.217712       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:14:08.217781       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:14:08.217809       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:14:08.217847       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:14:08.217874       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:14:08.218634       1 config.go:309] "Starting node config controller"
	I1025 10:14:08.218703       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:14:08.218734       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:14:08.317931       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:14:08.318088       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:14:08.318104       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [322c2cc726dbd336dc6d64af52ed0d7374e34249ef33e160f4bc633c2590c50d] <==
	E1025 10:13:49.391976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:13:49.571750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:13:50.070254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:13:50.309013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:13:50.582259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:13:54.319884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:13:55.549322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:13:55.831850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:13:56.712002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:13:56.744458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:13:56.861512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:13:57.774322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:13:58.224119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:13:58.380289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:13:58.672474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:13:58.770260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:13:58.898191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:13:59.065845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:13:59.239221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:13:59.371086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:13:59.586515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:13:59.607070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:14:00.498025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:14:01.364396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1025 10:14:19.077422       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.542088     798 apiserver.go:52] "Watching apiserver"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.543114     798 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.562505     798 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-480889" podUID="07959933-b7f0-46ad-9fa2-d9c661db7882"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.565354     798 scope.go:117] "RemoveContainer" containerID="3d23dbb42715f1bb9050f4885b532a8877bf1805090fe8f7db8038e263d7391a"
	Oct 25 10:14:06 ha-480889 kubelet[798]: E1025 10:14:06.567060     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-480889_kube-system(9a81b87b3b974d940626f18d45a6aab1)\"" pod="kube-system/kube-controller-manager-ha-480889" podUID="9a81b87b3b974d940626f18d45a6aab1"
	Oct 25 10:14:06 ha-480889 kubelet[798]: E1025 10:14:06.615909     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-480889\" already exists" pod="kube-system/etcd-ha-480889"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.623931     798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f73a13738b45c11bf39c58ec6843885" path="/var/lib/kubelet/pods/4f73a13738b45c11bf39c58ec6843885/volumes"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.638563     798 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.700998     798 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-480889"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.701172     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-480889"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705440     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/13833b7e-6794-4f30-8bec-20375bd481f2-cni-cfg\") pod \"kindnet-8fgmd\" (UID: \"13833b7e-6794-4f30-8bec-20375bd481f2\") " pod="kube-system/kindnet-8fgmd"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705602     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13833b7e-6794-4f30-8bec-20375bd481f2-xtables-lock\") pod \"kindnet-8fgmd\" (UID: \"13833b7e-6794-4f30-8bec-20375bd481f2\") " pod="kube-system/kindnet-8fgmd"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705700     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e73b3f75-02d7-46e3-940c-ffd727e4c87d-lib-modules\") pod \"kube-proxy-6x5rb\" (UID: \"e73b3f75-02d7-46e3-940c-ffd727e4c87d\") " pod="kube-system/kube-proxy-6x5rb"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705777     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13833b7e-6794-4f30-8bec-20375bd481f2-lib-modules\") pod \"kindnet-8fgmd\" (UID: \"13833b7e-6794-4f30-8bec-20375bd481f2\") " pod="kube-system/kindnet-8fgmd"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.705902     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e73b3f75-02d7-46e3-940c-ffd727e4c87d-xtables-lock\") pod \"kube-proxy-6x5rb\" (UID: \"e73b3f75-02d7-46e3-940c-ffd727e4c87d\") " pod="kube-system/kube-proxy-6x5rb"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.706049     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/15113825-bb63-434f-bd5e-2ffd789452d6-tmp\") pod \"storage-provisioner\" (UID: \"15113825-bb63-434f-bd5e-2ffd789452d6\") " pod="kube-system/storage-provisioner"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.763587     798 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:14:06 ha-480889 kubelet[798]: I1025 10:14:06.863460     798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-480889" podStartSLOduration=0.863439797 podStartE2EDuration="863.439797ms" podCreationTimestamp="2025-10-25 10:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:14:06.837585194 +0000 UTC m=+38.428191520" watchObservedRunningTime="2025-10-25 10:14:06.863439797 +0000 UTC m=+38.454046098"
	Oct 25 10:14:07 ha-480889 kubelet[798]: W1025 10:14:07.082204     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/crio-221f4b21ed8c28b6fd1698347efb2e67bd612d196fc843d8d64f3be9c60b2221 WatchSource:0}: Error finding container 221f4b21ed8c28b6fd1698347efb2e67bd612d196fc843d8d64f3be9c60b2221: Status 404 returned error can't find the container with id 221f4b21ed8c28b6fd1698347efb2e67bd612d196fc843d8d64f3be9c60b2221
	Oct 25 10:14:08 ha-480889 kubelet[798]: I1025 10:14:08.400429     798 scope.go:117] "RemoveContainer" containerID="3d23dbb42715f1bb9050f4885b532a8877bf1805090fe8f7db8038e263d7391a"
	Oct 25 10:14:08 ha-480889 kubelet[798]: E1025 10:14:08.400599     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-480889_kube-system(9a81b87b3b974d940626f18d45a6aab1)\"" pod="kube-system/kube-controller-manager-ha-480889" podUID="9a81b87b3b974d940626f18d45a6aab1"
	Oct 25 10:14:20 ha-480889 kubelet[798]: I1025 10:14:20.620615     798 scope.go:117] "RemoveContainer" containerID="3d23dbb42715f1bb9050f4885b532a8877bf1805090fe8f7db8038e263d7391a"
	Oct 25 10:14:28 ha-480889 kubelet[798]: E1025 10:14:28.527965     798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb\": container with ID starting with 863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb not found: ID does not exist" containerID="863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb"
	Oct 25 10:14:28 ha-480889 kubelet[798]: I1025 10:14:28.528089     798 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb" err="rpc error: code = NotFound desc = could not find container \"863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb\": container with ID starting with 863261f33854b355adb7cf1877fbfb03718ee4a2555370d2149d1cdc448496bb not found: ID does not exist"
	Oct 25 10:14:37 ha-480889 kubelet[798]: I1025 10:14:37.870067     798 scope.go:117] "RemoveContainer" containerID="9e45eacfcf479b2839ca5aa015423a2b920806c92232de9220ff03c17f84e584"
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-480889 -n ha-480889
helpers_test.go:269: (dbg) Run:  kubectl --context ha-480889 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-q5kt7
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-480889 describe pod busybox-7b57f96db7-q5kt7
helpers_test.go:290: (dbg) kubectl --context ha-480889 describe pod busybox-7b57f96db7-q5kt7:

-- stdout --
	Name:             busybox-7b57f96db7-q5kt7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r5xf9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-r5xf9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  108s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  108s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  11s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  11s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.43s)
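
The describe output above shows why busybox-7b57f96db7-q5kt7 never scheduled: of the four nodes, the ones still reachable already host busybox replicas (pod anti-affinity), and the rest carry the node.kubernetes.io/unreachable taint (one later also marked unschedulable) after the secondary node was deleted. A follow-up check along these lines (hypothetical, not part of the recorded run) would confirm which nodes were still tainted:

	kubectl --context ha-480889 get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints[*].key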

TestJSONOutput/pause/Command (1.98s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-974051 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-974051 --output=json --user=testUser: exit status 80 (1.980882411s)

-- stdout --
	{"specversion":"1.0","id":"12f382f9-815c-4711-b848-db3d9d8daf06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-974051 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"15b29a6d-a0e2-43cb-9e36-29f905254bbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-25T10:26:51Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"aa0735ab-e22c-410b-a513-96715f72446c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-974051 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.98s)
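
This failure reduces to a single underlying error: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", so minikube cannot enumerate containers before pausing them. On a CRI-O node that directory only exists once runc has created container state under its default root, so its absence suggests the containers are managed under a different runtime state root. A manual check along these lines (hypothetical, not captured in this run) would show what the node is actually running:

	out/minikube-linux-arm64 -p json-output-974051 ssh "sudo ls /run/runc; sudo crictl ps"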

TestJSONOutput/unpause/Command (1.85s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-974051 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-974051 --output=json --user=testUser: exit status 80 (1.849355316s)

-- stdout --
	{"specversion":"1.0","id":"cf93766e-fce5-4433-922f-ab33845cfd6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-974051 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"2671a914-5c29-40fe-9ebf-d60a4b6cc2fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-25T10:26:53Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"69fa074c-34b3-4341-94bd-8c8851126347","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-974051 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.85s)

TestPause/serial/Pause (7.88s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-494622 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-494622 --alsologtostderr -v=5: exit status 80 (2.432173144s)

-- stdout --
	* Pausing node pause-494622 ... 
	
	

-- /stdout --
** stderr ** 
	I1025 10:49:07.466240  422306 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:49:07.467104  422306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:49:07.467135  422306 out.go:374] Setting ErrFile to fd 2...
	I1025 10:49:07.467155  422306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:49:07.467437  422306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:49:07.467735  422306 out.go:368] Setting JSON to false
	I1025 10:49:07.467786  422306 mustload.go:65] Loading cluster: pause-494622
	I1025 10:49:07.468233  422306 config.go:182] Loaded profile config "pause-494622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:49:07.468709  422306 cli_runner.go:164] Run: docker container inspect pause-494622 --format={{.State.Status}}
	I1025 10:49:07.486348  422306 host.go:66] Checking if "pause-494622" exists ...
	I1025 10:49:07.486715  422306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:49:07.570991  422306 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:49:07.560707732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:49:07.571624  422306 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-494622 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:49:07.575064  422306 out.go:179] * Pausing node pause-494622 ... 
	I1025 10:49:07.578806  422306 host.go:66] Checking if "pause-494622" exists ...
	I1025 10:49:07.579336  422306 ssh_runner.go:195] Run: systemctl --version
	I1025 10:49:07.579391  422306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:49:07.600996  422306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/pause-494622/id_rsa Username:docker}
	I1025 10:49:07.708538  422306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:49:07.722712  422306 pause.go:52] kubelet running: true
	I1025 10:49:07.722782  422306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:49:08.011519  422306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:49:08.011615  422306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:49:08.107225  422306 cri.go:89] found id: "b30e317be3539ac55cad517b2c4bbfd1d83ba79be6b363594a6726c56ecba536"
	I1025 10:49:08.107250  422306 cri.go:89] found id: "96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a"
	I1025 10:49:08.107256  422306 cri.go:89] found id: "0411316bf38375595d1010d1a674f0717b0b515e8d8abbbff9e7ea89d8444814"
	I1025 10:49:08.107260  422306 cri.go:89] found id: "336fcce7f177ee63099c95f463857f65a6c8674b4cae330456af35a66d1e5927"
	I1025 10:49:08.107263  422306 cri.go:89] found id: "6a52960107a32d9d63c9a726cde40d6bc306416bd0198608ded2c7804daad2a9"
	I1025 10:49:08.107267  422306 cri.go:89] found id: "a2575b358a4844d89fec42a8040e731bf10578ac0841857c5d57c9f3d436492e"
	I1025 10:49:08.107271  422306 cri.go:89] found id: "9dd9f6a0583890e7ee49e45dee555a894f7bebf9e5043ef4e4d76611b6528f01"
	I1025 10:49:08.107274  422306 cri.go:89] found id: "0dacb3499bb498eb60afdc5550e70098c64ba1e92df1f33f6f5990e014b49766"
	I1025 10:49:08.107278  422306 cri.go:89] found id: "0a2ac9c53256707ef6dd02317248b4d542d804a6bc9fa4ffe7fcf73c2e0e74ba"
	I1025 10:49:08.107285  422306 cri.go:89] found id: "3e2ff0d6a6cab5c143e5ccbf87b8ae6a4d27061da41107c8817afeec41eb6940"
	I1025 10:49:08.107289  422306 cri.go:89] found id: "5a391da839348564e6c59f05bd1af2867b2ba66f17ea3ba8731f53c762dce341"
	I1025 10:49:08.107294  422306 cri.go:89] found id: "4f17ef8ba1aa56544d98deddadc6648233f1aa7f176fb6f9cb061a02e556af0f"
	I1025 10:49:08.107301  422306 cri.go:89] found id: "ee7dbc55c95114fc27b76a23f146b7b3cdf19a29f316645a7438a38ba79d5fca"
	I1025 10:49:08.107305  422306 cri.go:89] found id: "56698e4599135d8d0d3a8b15f20fb0fcbcf302ce721ba8c99956c5c54be1673d"
	I1025 10:49:08.107308  422306 cri.go:89] found id: ""
	I1025 10:49:08.107357  422306 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:49:08.119597  422306 retry.go:31] will retry after 315.615609ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:49:08Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:49:08.436241  422306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:49:08.449819  422306 pause.go:52] kubelet running: false
	I1025 10:49:08.449909  422306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:49:08.586307  422306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:49:08.586412  422306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:49:08.652155  422306 cri.go:89] found id: "b30e317be3539ac55cad517b2c4bbfd1d83ba79be6b363594a6726c56ecba536"
	I1025 10:49:08.652188  422306 cri.go:89] found id: "96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a"
	I1025 10:49:08.652193  422306 cri.go:89] found id: "0411316bf38375595d1010d1a674f0717b0b515e8d8abbbff9e7ea89d8444814"
	I1025 10:49:08.652197  422306 cri.go:89] found id: "336fcce7f177ee63099c95f463857f65a6c8674b4cae330456af35a66d1e5927"
	I1025 10:49:08.652200  422306 cri.go:89] found id: "6a52960107a32d9d63c9a726cde40d6bc306416bd0198608ded2c7804daad2a9"
	I1025 10:49:08.652203  422306 cri.go:89] found id: "a2575b358a4844d89fec42a8040e731bf10578ac0841857c5d57c9f3d436492e"
	I1025 10:49:08.652208  422306 cri.go:89] found id: "9dd9f6a0583890e7ee49e45dee555a894f7bebf9e5043ef4e4d76611b6528f01"
	I1025 10:49:08.652211  422306 cri.go:89] found id: "0dacb3499bb498eb60afdc5550e70098c64ba1e92df1f33f6f5990e014b49766"
	I1025 10:49:08.652214  422306 cri.go:89] found id: "0a2ac9c53256707ef6dd02317248b4d542d804a6bc9fa4ffe7fcf73c2e0e74ba"
	I1025 10:49:08.652220  422306 cri.go:89] found id: "3e2ff0d6a6cab5c143e5ccbf87b8ae6a4d27061da41107c8817afeec41eb6940"
	I1025 10:49:08.652224  422306 cri.go:89] found id: "5a391da839348564e6c59f05bd1af2867b2ba66f17ea3ba8731f53c762dce341"
	I1025 10:49:08.652228  422306 cri.go:89] found id: "4f17ef8ba1aa56544d98deddadc6648233f1aa7f176fb6f9cb061a02e556af0f"
	I1025 10:49:08.652247  422306 cri.go:89] found id: "ee7dbc55c95114fc27b76a23f146b7b3cdf19a29f316645a7438a38ba79d5fca"
	I1025 10:49:08.652256  422306 cri.go:89] found id: "56698e4599135d8d0d3a8b15f20fb0fcbcf302ce721ba8c99956c5c54be1673d"
	I1025 10:49:08.652260  422306 cri.go:89] found id: ""
	I1025 10:49:08.652309  422306 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:49:08.663923  422306 retry.go:31] will retry after 253.304217ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:49:08Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:49:08.918205  422306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:49:08.933956  422306 pause.go:52] kubelet running: false
	I1025 10:49:08.934071  422306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:49:09.084240  422306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:49:09.084319  422306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:49:09.153239  422306 cri.go:89] found id: "b30e317be3539ac55cad517b2c4bbfd1d83ba79be6b363594a6726c56ecba536"
	I1025 10:49:09.153263  422306 cri.go:89] found id: "96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a"
	I1025 10:49:09.153268  422306 cri.go:89] found id: "0411316bf38375595d1010d1a674f0717b0b515e8d8abbbff9e7ea89d8444814"
	I1025 10:49:09.153272  422306 cri.go:89] found id: "336fcce7f177ee63099c95f463857f65a6c8674b4cae330456af35a66d1e5927"
	I1025 10:49:09.153275  422306 cri.go:89] found id: "6a52960107a32d9d63c9a726cde40d6bc306416bd0198608ded2c7804daad2a9"
	I1025 10:49:09.153280  422306 cri.go:89] found id: "a2575b358a4844d89fec42a8040e731bf10578ac0841857c5d57c9f3d436492e"
	I1025 10:49:09.153283  422306 cri.go:89] found id: "9dd9f6a0583890e7ee49e45dee555a894f7bebf9e5043ef4e4d76611b6528f01"
	I1025 10:49:09.153287  422306 cri.go:89] found id: "0dacb3499bb498eb60afdc5550e70098c64ba1e92df1f33f6f5990e014b49766"
	I1025 10:49:09.153298  422306 cri.go:89] found id: "0a2ac9c53256707ef6dd02317248b4d542d804a6bc9fa4ffe7fcf73c2e0e74ba"
	I1025 10:49:09.153306  422306 cri.go:89] found id: "3e2ff0d6a6cab5c143e5ccbf87b8ae6a4d27061da41107c8817afeec41eb6940"
	I1025 10:49:09.153310  422306 cri.go:89] found id: "5a391da839348564e6c59f05bd1af2867b2ba66f17ea3ba8731f53c762dce341"
	I1025 10:49:09.153313  422306 cri.go:89] found id: "4f17ef8ba1aa56544d98deddadc6648233f1aa7f176fb6f9cb061a02e556af0f"
	I1025 10:49:09.153317  422306 cri.go:89] found id: "ee7dbc55c95114fc27b76a23f146b7b3cdf19a29f316645a7438a38ba79d5fca"
	I1025 10:49:09.153330  422306 cri.go:89] found id: "56698e4599135d8d0d3a8b15f20fb0fcbcf302ce721ba8c99956c5c54be1673d"
	I1025 10:49:09.153336  422306 cri.go:89] found id: ""
	I1025 10:49:09.153386  422306 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:49:09.164572  422306 retry.go:31] will retry after 345.314805ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:49:09Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:49:09.510121  422306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:49:09.523267  422306 pause.go:52] kubelet running: false
	I1025 10:49:09.523413  422306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:49:09.670877  422306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:49:09.671001  422306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:49:09.736973  422306 cri.go:89] found id: "b30e317be3539ac55cad517b2c4bbfd1d83ba79be6b363594a6726c56ecba536"
	I1025 10:49:09.737006  422306 cri.go:89] found id: "96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a"
	I1025 10:49:09.737011  422306 cri.go:89] found id: "0411316bf38375595d1010d1a674f0717b0b515e8d8abbbff9e7ea89d8444814"
	I1025 10:49:09.737015  422306 cri.go:89] found id: "336fcce7f177ee63099c95f463857f65a6c8674b4cae330456af35a66d1e5927"
	I1025 10:49:09.737047  422306 cri.go:89] found id: "6a52960107a32d9d63c9a726cde40d6bc306416bd0198608ded2c7804daad2a9"
	I1025 10:49:09.737060  422306 cri.go:89] found id: "a2575b358a4844d89fec42a8040e731bf10578ac0841857c5d57c9f3d436492e"
	I1025 10:49:09.737064  422306 cri.go:89] found id: "9dd9f6a0583890e7ee49e45dee555a894f7bebf9e5043ef4e4d76611b6528f01"
	I1025 10:49:09.737067  422306 cri.go:89] found id: "0dacb3499bb498eb60afdc5550e70098c64ba1e92df1f33f6f5990e014b49766"
	I1025 10:49:09.737085  422306 cri.go:89] found id: "0a2ac9c53256707ef6dd02317248b4d542d804a6bc9fa4ffe7fcf73c2e0e74ba"
	I1025 10:49:09.737105  422306 cri.go:89] found id: "3e2ff0d6a6cab5c143e5ccbf87b8ae6a4d27061da41107c8817afeec41eb6940"
	I1025 10:49:09.737123  422306 cri.go:89] found id: "5a391da839348564e6c59f05bd1af2867b2ba66f17ea3ba8731f53c762dce341"
	I1025 10:49:09.737126  422306 cri.go:89] found id: "4f17ef8ba1aa56544d98deddadc6648233f1aa7f176fb6f9cb061a02e556af0f"
	I1025 10:49:09.737130  422306 cri.go:89] found id: "ee7dbc55c95114fc27b76a23f146b7b3cdf19a29f316645a7438a38ba79d5fca"
	I1025 10:49:09.737133  422306 cri.go:89] found id: "56698e4599135d8d0d3a8b15f20fb0fcbcf302ce721ba8c99956c5c54be1673d"
	I1025 10:49:09.737136  422306 cri.go:89] found id: ""
	I1025 10:49:09.737201  422306 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:49:09.752228  422306 out.go:203] 
	W1025 10:49:09.755201  422306 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:49:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:49:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:49:09.755228  422306 out.go:285] * 
	* 
	W1025 10:49:09.822365  422306 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:49:09.825514  422306 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-494622 --alsologtostderr -v=5" : exit status 80
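
The stderr trace above shows the same /run/runc error as the TestJSONOutput failures, retried three times (retry.go:31, back-offs of roughly 316ms, 253ms and 345ms) before the command gives up with GUEST_PAUSE. Note the side effect recorded in the log: the first attempt already ran `sudo systemctl disable --now kubelet` ("kubelet running: true" flips to "false" on the retries), so the profile is left with kubelet stopped but its containers unpaused, which matches the post-mortem status below reporting the host as Running. A recovery step along these lines (hypothetical, not part of the test) would restart the node agent:

	out/minikube-linux-arm64 -p pause-494622 ssh "sudo systemctl enable --now kubelet"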
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-494622
helpers_test.go:243: (dbg) docker inspect pause-494622:

-- stdout --
	[
	    {
	        "Id": "8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc",
	        "Created": "2025-10-25T10:47:18.505812885Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 415967,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:47:18.579543096Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc/hostname",
	        "HostsPath": "/var/lib/docker/containers/8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc/hosts",
	        "LogPath": "/var/lib/docker/containers/8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc/8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc-json.log",
	        "Name": "/pause-494622",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-494622:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-494622",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc",
	                "LowerDir": "/var/lib/docker/overlay2/7cc3bd1ba4fbb850dd711949a736ec073b990dc7d577bb924e008bea21c85970-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7cc3bd1ba4fbb850dd711949a736ec073b990dc7d577bb924e008bea21c85970/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7cc3bd1ba4fbb850dd711949a736ec073b990dc7d577bb924e008bea21c85970/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7cc3bd1ba4fbb850dd711949a736ec073b990dc7d577bb924e008bea21c85970/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-494622",
	                "Source": "/var/lib/docker/volumes/pause-494622/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-494622",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-494622",
	                "name.minikube.sigs.k8s.io": "pause-494622",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f739a8ad6c0b599900149ebfd50aa15c33cc876a37eea479c29d2a1bad72969",
	            "SandboxKey": "/var/run/docker/netns/8f739a8ad6c0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33383"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33384"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33387"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33385"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-494622": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:1d:e2:27:6b:7f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d429c36bc6aeac81af962db139f6a44aa42c438edc8e849f181aeb21cc399667",
	                    "EndpointID": "666e17c355efdba59f5f69d9e31743b8181b120e6f0fd90aeed7607f991c2f82",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-494622",
	                        "8561594a4e29"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-494622 -n pause-494622
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-494622 -n pause-494622: exit status 2 (337.101454ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-494622 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-494622 logs -n 25: (1.694177694s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-670512 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:43 UTC │ 25 Oct 25 10:43 UTC │
	│ start   │ -p missing-upgrade-486371 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-486371    │ jenkins │ v1.32.0 │ 25 Oct 25 10:43 UTC │ 25 Oct 25 10:44 UTC │
	│ start   │ -p NoKubernetes-670512 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:43 UTC │ 25 Oct 25 10:44 UTC │
	│ start   │ -p missing-upgrade-486371 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-486371    │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:45 UTC │
	│ delete  │ -p NoKubernetes-670512                                                                                                                   │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:44 UTC │
	│ start   │ -p NoKubernetes-670512 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:44 UTC │
	│ ssh     │ -p NoKubernetes-670512 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │                     │
	│ stop    │ -p NoKubernetes-670512                                                                                                                   │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:44 UTC │
	│ start   │ -p NoKubernetes-670512 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:44 UTC │
	│ ssh     │ -p NoKubernetes-670512 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │                     │
	│ delete  │ -p NoKubernetes-670512                                                                                                                   │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:44 UTC │
	│ start   │ -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:45 UTC │
	│ delete  │ -p missing-upgrade-486371                                                                                                                │ missing-upgrade-486371    │ jenkins │ v1.37.0 │ 25 Oct 25 10:45 UTC │ 25 Oct 25 10:45 UTC │
	│ start   │ -p stopped-upgrade-190411 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-190411    │ jenkins │ v1.32.0 │ 25 Oct 25 10:45 UTC │ 25 Oct 25 10:45 UTC │
	│ stop    │ -p kubernetes-upgrade-291330                                                                                                             │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:45 UTC │ 25 Oct 25 10:45 UTC │
	│ start   │ -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:45 UTC │                     │
	│ stop    │ stopped-upgrade-190411 stop                                                                                                              │ stopped-upgrade-190411    │ jenkins │ v1.32.0 │ 25 Oct 25 10:45 UTC │ 25 Oct 25 10:45 UTC │
	│ start   │ -p stopped-upgrade-190411 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-190411    │ jenkins │ v1.37.0 │ 25 Oct 25 10:45 UTC │ 25 Oct 25 10:46 UTC │
	│ delete  │ -p stopped-upgrade-190411                                                                                                                │ stopped-upgrade-190411    │ jenkins │ v1.37.0 │ 25 Oct 25 10:46 UTC │ 25 Oct 25 10:46 UTC │
	│ start   │ -p running-upgrade-031456 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-031456    │ jenkins │ v1.32.0 │ 25 Oct 25 10:46 UTC │ 25 Oct 25 10:46 UTC │
	│ start   │ -p running-upgrade-031456 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-031456    │ jenkins │ v1.37.0 │ 25 Oct 25 10:46 UTC │ 25 Oct 25 10:47 UTC │
	│ delete  │ -p running-upgrade-031456                                                                                                                │ running-upgrade-031456    │ jenkins │ v1.37.0 │ 25 Oct 25 10:47 UTC │ 25 Oct 25 10:47 UTC │
	│ start   │ -p pause-494622 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-494622              │ jenkins │ v1.37.0 │ 25 Oct 25 10:47 UTC │ 25 Oct 25 10:48 UTC │
	│ start   │ -p pause-494622 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-494622              │ jenkins │ v1.37.0 │ 25 Oct 25 10:48 UTC │ 25 Oct 25 10:49 UTC │
	│ pause   │ -p pause-494622 --alsologtostderr -v=5                                                                                                   │ pause-494622              │ jenkins │ v1.37.0 │ 25 Oct 25 10:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:48:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:48:36.665632  420177 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:48:36.665842  420177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:48:36.665871  420177 out.go:374] Setting ErrFile to fd 2...
	I1025 10:48:36.665893  420177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:48:36.666376  420177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:48:36.667381  420177 out.go:368] Setting JSON to false
	I1025 10:48:36.668364  420177 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9068,"bootTime":1761380249,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:48:36.668438  420177 start.go:141] virtualization:  
	I1025 10:48:36.673913  420177 out.go:179] * [pause-494622] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:48:36.676944  420177 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:48:36.677097  420177 notify.go:220] Checking for updates...
	I1025 10:48:36.682728  420177 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:48:36.685593  420177 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:48:36.688639  420177 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:48:36.691688  420177 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:48:36.694827  420177 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:48:36.698603  420177 config.go:182] Loaded profile config "pause-494622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:48:36.699397  420177 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:48:36.726096  420177 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:48:36.726268  420177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:48:36.792574  420177 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:48:36.78153526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:48:36.792692  420177 docker.go:318] overlay module found
	I1025 10:48:36.795880  420177 out.go:179] * Using the docker driver based on existing profile
	I1025 10:48:36.798727  420177 start.go:305] selected driver: docker
	I1025 10:48:36.798759  420177 start.go:925] validating driver "docker" against &{Name:pause-494622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-494622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:48:36.798892  420177 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:48:36.799029  420177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:48:36.867538  420177 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:48:36.858518127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:48:36.867967  420177 cni.go:84] Creating CNI manager for ""
	I1025 10:48:36.868032  420177 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:48:36.868081  420177 start.go:349] cluster config:
	{Name:pause-494622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-494622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:48:36.871291  420177 out.go:179] * Starting "pause-494622" primary control-plane node in "pause-494622" cluster
	I1025 10:48:36.874129  420177 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:48:36.877030  420177 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:48:36.879747  420177 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:48:36.879800  420177 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:48:36.879814  420177 cache.go:58] Caching tarball of preloaded images
	I1025 10:48:36.879848  420177 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:48:36.879914  420177 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:48:36.879924  420177 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:48:36.880077  420177 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/config.json ...
	I1025 10:48:36.898443  420177 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:48:36.898466  420177 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:48:36.898486  420177 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:48:36.898532  420177 start.go:360] acquireMachinesLock for pause-494622: {Name:mk69e910d428c5e2515675cd602840cb99bca6c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:48:36.898593  420177 start.go:364] duration metric: took 38.261µs to acquireMachinesLock for "pause-494622"
	I1025 10:48:36.898614  420177 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:48:36.898623  420177 fix.go:54] fixHost starting: 
	I1025 10:48:36.898878  420177 cli_runner.go:164] Run: docker container inspect pause-494622 --format={{.State.Status}}
	I1025 10:48:36.915955  420177 fix.go:112] recreateIfNeeded on pause-494622: state=Running err=<nil>
	W1025 10:48:36.915986  420177 fix.go:138] unexpected machine state, will restart: <nil>
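
For reference, the reuse decision above comes down to a single container-state probe: fix.go inspects the profile container and, on state=Running, reprovisions the machine in place instead of recreating it. A minimal bash sketch of the same check (the pause-494622 profile name and a local Docker daemon are assumed):

	# Probe container state as the log's "docker container inspect" call does.
	# Docker reports lowercase "running"; libmachine surfaces it as state=Running.
	state=$(docker container inspect pause-494622 --format '{{.State.Status}}')
	if [ "$state" = "running" ]; then
	  echo "machine running - reprovision in place"
	else
	  echo "state=$state - minikube would restart or recreate it"
	fi
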
	I1025 10:48:34.294067  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:34.294546  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:34.294615  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:34.294693  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:34.336197  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:34.336219  407575 cri.go:89] found id: ""
	I1025 10:48:34.336228  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:34.336307  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:34.340674  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:34.340786  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:34.378735  407575 cri.go:89] found id: ""
	I1025 10:48:34.378763  407575 logs.go:282] 0 containers: []
	W1025 10:48:34.378773  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:34.378779  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:34.378839  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:34.426435  407575 cri.go:89] found id: ""
	I1025 10:48:34.426465  407575 logs.go:282] 0 containers: []
	W1025 10:48:34.426473  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:34.426480  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:34.426572  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:34.468861  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:34.468889  407575 cri.go:89] found id: ""
	I1025 10:48:34.468898  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:34.468954  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:34.473471  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:34.473548  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:34.517492  407575 cri.go:89] found id: ""
	I1025 10:48:34.517522  407575 logs.go:282] 0 containers: []
	W1025 10:48:34.517530  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:34.517540  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:34.517627  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:34.549422  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:34.549455  407575 cri.go:89] found id: ""
	I1025 10:48:34.549463  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:34.549555  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:34.553584  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:34.553671  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:34.592218  407575 cri.go:89] found id: ""
	I1025 10:48:34.592253  407575 logs.go:282] 0 containers: []
	W1025 10:48:34.592262  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:34.592284  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:34.592369  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:34.621198  407575 cri.go:89] found id: ""
	I1025 10:48:34.621237  407575 logs.go:282] 0 containers: []
	W1025 10:48:34.621246  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:34.621255  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:34.621294  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:34.716899  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:34.716922  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:34.716935  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:34.765804  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:34.765838  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:34.823684  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:34.823731  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:34.852608  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:34.852640  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:34.917392  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:34.917422  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:34.970653  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:34.970681  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:48:35.094676  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:35.094714  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
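
The log-gathering pass above (process 407575) repeats one pattern per control-plane component: resolve container IDs with a crictl name filter, then tail each hit's logs. A condensed bash sketch of that loop, using only the crictl flags that appear in the log:

	# List IDs in any state per component, then tail the last 400 log lines.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "No container was found matching \"$name\""
	    continue
	  fi
	  for id in $ids; do
	    sudo crictl logs --tail 400 "$id"
	  done
	done
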
	I1025 10:48:36.919252  420177 out.go:252] * Updating the running docker "pause-494622" container ...
	I1025 10:48:36.919288  420177 machine.go:93] provisionDockerMachine start ...
	I1025 10:48:36.919385  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:36.936950  420177 main.go:141] libmachine: Using SSH client type: native
	I1025 10:48:36.937286  420177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1025 10:48:36.937296  420177 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:48:37.101485  420177 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-494622
	
	I1025 10:48:37.101511  420177 ubuntu.go:182] provisioning hostname "pause-494622"
	I1025 10:48:37.101575  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:37.119455  420177 main.go:141] libmachine: Using SSH client type: native
	I1025 10:48:37.119779  420177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1025 10:48:37.119791  420177 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-494622 && echo "pause-494622" | sudo tee /etc/hostname
	I1025 10:48:37.283195  420177 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-494622
	
	I1025 10:48:37.283288  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:37.300664  420177 main.go:141] libmachine: Using SSH client type: native
	I1025 10:48:37.300973  420177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1025 10:48:37.300993  420177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-494622' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-494622/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-494622' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:48:37.450412  420177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:48:37.450439  420177 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:48:37.450459  420177 ubuntu.go:190] setting up certificates
	I1025 10:48:37.450469  420177 provision.go:84] configureAuth start
	I1025 10:48:37.450548  420177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-494622
	I1025 10:48:37.470268  420177 provision.go:143] copyHostCerts
	I1025 10:48:37.470340  420177 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:48:37.470358  420177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:48:37.470440  420177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:48:37.470553  420177 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:48:37.470564  420177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:48:37.470591  420177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:48:37.470651  420177 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:48:37.470659  420177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:48:37.470683  420177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:48:37.470737  420177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.pause-494622 san=[127.0.0.1 192.168.85.2 localhost minikube pause-494622]
	I1025 10:48:38.453397  420177 provision.go:177] copyRemoteCerts
	I1025 10:48:38.453457  420177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:48:38.453502  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:38.475745  420177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/pause-494622/id_rsa Username:docker}
	I1025 10:48:38.591676  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:48:38.610267  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 10:48:38.628844  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:48:38.648176  420177 provision.go:87] duration metric: took 1.19768257s to configureAuth
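
configureAuth above regenerated the machine server certificate with the SANs from the provision line (127.0.0.1 192.168.85.2 localhost minikube pause-494622), and the scp calls placed it at /etc/docker/server.pem inside the node. A quick spot-check of those SANs, assuming openssl is present in the node image:

	# Run inside the node: print the Subject Alternative Names on server.pem.
	sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'
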
	I1025 10:48:38.648203  420177 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:48:38.648412  420177 config.go:182] Loaded profile config "pause-494622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:48:38.648591  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:38.665309  420177 main.go:141] libmachine: Using SSH client type: native
	I1025 10:48:38.665627  420177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1025 10:48:38.665647  420177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:48:37.613621  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:37.614042  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:37.614083  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:37.614136  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:37.658485  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:37.658511  407575 cri.go:89] found id: ""
	I1025 10:48:37.658526  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:37.658590  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:37.663629  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:37.663724  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:37.748596  407575 cri.go:89] found id: ""
	I1025 10:48:37.748620  407575 logs.go:282] 0 containers: []
	W1025 10:48:37.748637  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:37.748643  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:37.748716  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:37.787592  407575 cri.go:89] found id: ""
	I1025 10:48:37.787615  407575 logs.go:282] 0 containers: []
	W1025 10:48:37.787623  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:37.787629  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:37.787686  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:37.834201  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:37.834232  407575 cri.go:89] found id: ""
	I1025 10:48:37.834240  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:37.834314  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:37.840204  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:37.840302  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:37.884578  407575 cri.go:89] found id: ""
	I1025 10:48:37.884657  407575 logs.go:282] 0 containers: []
	W1025 10:48:37.884680  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:37.884711  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:37.884842  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:37.917392  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:37.917411  407575 cri.go:89] found id: ""
	I1025 10:48:37.917419  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:37.917481  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:37.921268  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:37.921339  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:37.952332  407575 cri.go:89] found id: ""
	I1025 10:48:37.952354  407575 logs.go:282] 0 containers: []
	W1025 10:48:37.952363  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:37.952370  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:37.952495  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:37.988299  407575 cri.go:89] found id: ""
	I1025 10:48:37.988320  407575 logs.go:282] 0 containers: []
	W1025 10:48:37.988328  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:37.988337  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:37.988348  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:38.031930  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:38.032040  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:38.114746  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:38.114824  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:38.156161  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:38.156191  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:38.222174  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:38.222214  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:38.284269  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:38.284297  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:48:38.435760  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:38.435805  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:38.457745  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:38.457838  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:38.565724  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:41.065815  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:41.066333  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:41.066385  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:41.066445  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:41.094971  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:41.094995  407575 cri.go:89] found id: ""
	I1025 10:48:41.095003  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:41.095067  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:41.098624  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:41.098700  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:41.124322  407575 cri.go:89] found id: ""
	I1025 10:48:41.124344  407575 logs.go:282] 0 containers: []
	W1025 10:48:41.124352  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:41.124359  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:41.124417  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:41.150155  407575 cri.go:89] found id: ""
	I1025 10:48:41.150179  407575 logs.go:282] 0 containers: []
	W1025 10:48:41.150188  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:41.150195  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:41.150254  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:41.179154  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:41.179183  407575 cri.go:89] found id: ""
	I1025 10:48:41.179191  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:41.179251  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:41.182864  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:41.182938  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:41.209602  407575 cri.go:89] found id: ""
	I1025 10:48:41.209628  407575 logs.go:282] 0 containers: []
	W1025 10:48:41.209637  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:41.209645  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:41.209705  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:41.247789  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:41.247810  407575 cri.go:89] found id: ""
	I1025 10:48:41.247818  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:41.247874  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:41.251703  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:41.251775  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:41.278847  407575 cri.go:89] found id: ""
	I1025 10:48:41.278871  407575 logs.go:282] 0 containers: []
	W1025 10:48:41.278880  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:41.278887  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:41.278948  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:41.308907  407575 cri.go:89] found id: ""
	I1025 10:48:41.308929  407575 logs.go:282] 0 containers: []
	W1025 10:48:41.308938  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:41.308947  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:41.308959  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:41.379966  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:41.380002  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:41.405781  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:41.405813  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:41.464082  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:41.464114  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:41.494371  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:41.494403  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:48:41.608100  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:41.608138  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:41.630736  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:41.630787  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:41.709965  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:41.710021  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:41.710035  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:43.997521  420177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:48:43.997547  420177 machine.go:96] duration metric: took 7.078250117s to provisionDockerMachine
	I1025 10:48:43.997560  420177 start.go:293] postStartSetup for "pause-494622" (driver="docker")
	I1025 10:48:43.997571  420177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:48:43.997640  420177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:48:43.997700  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:44.017956  420177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/pause-494622/id_rsa Username:docker}
	I1025 10:48:44.126044  420177 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:48:44.129436  420177 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:48:44.129466  420177 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:48:44.129477  420177 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:48:44.129532  420177 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:48:44.129622  420177 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:48:44.129741  420177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:48:44.137226  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:48:44.155087  420177 start.go:296] duration metric: took 157.512154ms for postStartSetup
	I1025 10:48:44.155171  420177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:48:44.155238  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:44.172794  420177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/pause-494622/id_rsa Username:docker}
	I1025 10:48:44.277229  420177 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:48:44.285283  420177 fix.go:56] duration metric: took 7.386643463s for fixHost
	I1025 10:48:44.285305  420177 start.go:83] releasing machines lock for "pause-494622", held for 7.386701884s
	I1025 10:48:44.285375  420177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-494622
	I1025 10:48:44.304852  420177 ssh_runner.go:195] Run: cat /version.json
	I1025 10:48:44.304903  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:44.305175  420177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:48:44.305230  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:44.342356  420177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/pause-494622/id_rsa Username:docker}
	I1025 10:48:44.348248  420177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/pause-494622/id_rsa Username:docker}
	I1025 10:48:44.466240  420177 ssh_runner.go:195] Run: systemctl --version
	I1025 10:48:44.565845  420177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:48:44.620137  420177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:48:44.628162  420177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:48:44.628237  420177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:48:44.638542  420177 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:48:44.638573  420177 start.go:495] detecting cgroup driver to use...
	I1025 10:48:44.638606  420177 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:48:44.638666  420177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:48:44.663975  420177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:48:44.682569  420177 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:48:44.682641  420177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:48:44.704469  420177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:48:44.719329  420177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:48:44.908590  420177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:48:45.100215  420177 docker.go:234] disabling docker service ...
	I1025 10:48:45.100316  420177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:48:45.132862  420177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:48:45.151815  420177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:48:45.333705  420177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:48:45.514748  420177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:48:45.528517  420177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:48:45.544533  420177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:48:45.544627  420177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.554640  420177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:48:45.554743  420177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.564491  420177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.574327  420177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.583903  420177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:48:45.593207  420177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.603025  420177 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.612551  420177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.622075  420177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:48:45.630267  420177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:48:45.638184  420177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:48:45.774525  420177 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:48:45.958460  420177 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:48:45.958595  420177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:48:45.963656  420177 start.go:563] Will wait 60s for crictl version
	I1025 10:48:45.963744  420177 ssh_runner.go:195] Run: which crictl
	I1025 10:48:45.967597  420177 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:48:46.004626  420177 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:48:46.004735  420177 ssh_runner.go:195] Run: crio --version
	I1025 10:48:46.035525  420177 ssh_runner.go:195] Run: crio --version
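
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl), restarts CRI-O, then waits on the socket and probes crictl. A sketch to spot-check that the edits took, using the values the log configures:

	# Run inside the node after the restart.
	sudo grep -E '^\s*(pause_image|cgroup_manager|conmon_cgroup)' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
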
	I1025 10:48:46.074469  420177 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:48:46.077305  420177 cli_runner.go:164] Run: docker network inspect pause-494622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:48:46.094130  420177 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:48:46.098553  420177 kubeadm.go:883] updating cluster {Name:pause-494622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-494622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:48:46.098709  420177 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:48:46.098771  420177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:48:46.132479  420177 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:48:46.132504  420177 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:48:46.132566  420177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:48:46.158479  420177 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:48:46.158509  420177 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:48:46.158518  420177 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 10:48:46.158624  420177 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-494622 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-494622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
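
The kubelet unit fragment above is staged over SSH as a systemd drop-in (the 362-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). To view the merged unit the node actually runs:

	# Show kubelet.service plus all drop-ins, then the effective ExecStart.
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart --no-pager
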
	I1025 10:48:46.158707  420177 ssh_runner.go:195] Run: crio config
	I1025 10:48:46.238449  420177 cni.go:84] Creating CNI manager for ""
	I1025 10:48:46.238536  420177 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:48:46.238578  420177 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:48:46.238631  420177 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-494622 NodeName:pause-494622 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:48:46.238810  420177 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-494622"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
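
For reference, minikube renders the multi-document config above from Go text/template data before copying it to the node as /var/tmp/minikube/kubeadm.yaml.new (the scp just below). A minimal sketch of that rendering approach, with a hypothetical trimmed template covering only the InitConfiguration fields shown:

package main

import (
	"os"
	"text/template"
)

// initCfg holds only the fields exercised by the trimmed template below;
// the real minikube struct carries the full kubeadm option surface.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

// tmpl is a hypothetical, minimal slice of the InitConfiguration
// document printed in the log above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	cfg := initCfg{
		AdvertiseAddress: "192.168.85.2",
		BindPort:         8443,
		NodeName:         "pause-494622",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}

Rendering against a struct keeps the per-profile values (advertise IP, port, node name, CRI socket) in one place rather than scattered through the YAML.
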
	I1025 10:48:46.238904  420177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:48:46.247438  420177 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:48:46.247590  420177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:48:46.255327  420177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1025 10:48:46.268523  420177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:48:46.282281  420177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1025 10:48:46.295265  420177 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:48:46.299131  420177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:48:46.433360  420177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:48:46.447370  420177 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622 for IP: 192.168.85.2
	I1025 10:48:46.447390  420177 certs.go:195] generating shared ca certs ...
	I1025 10:48:46.447417  420177 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:48:46.447603  420177 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:48:46.447679  420177 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:48:46.447714  420177 certs.go:257] generating profile certs ...
	I1025 10:48:46.447849  420177 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/client.key
	I1025 10:48:46.447971  420177 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/apiserver.key.46e526a6
	I1025 10:48:46.448055  420177 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/proxy-client.key
	I1025 10:48:46.448201  420177 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:48:46.448256  420177 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:48:46.448281  420177 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:48:46.448338  420177 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:48:46.448398  420177 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:48:46.448463  420177 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:48:46.448536  420177 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:48:46.450884  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:48:46.470066  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:48:46.488986  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:48:46.507932  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:48:46.526803  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 10:48:46.546107  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:48:46.564119  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:48:46.582375  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:48:46.599965  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:48:46.618555  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:48:46.637238  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:48:46.655232  420177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:48:44.248165  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:44.248606  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:44.248653  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:44.248719  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:44.279379  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:44.279403  407575 cri.go:89] found id: ""
	I1025 10:48:44.279411  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:44.279469  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:44.286082  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:44.286152  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:44.319593  407575 cri.go:89] found id: ""
	I1025 10:48:44.319621  407575 logs.go:282] 0 containers: []
	W1025 10:48:44.319630  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:44.319637  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:44.319697  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:44.376453  407575 cri.go:89] found id: ""
	I1025 10:48:44.376481  407575 logs.go:282] 0 containers: []
	W1025 10:48:44.376489  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:44.376496  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:44.376560  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:44.408932  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:44.408958  407575 cri.go:89] found id: ""
	I1025 10:48:44.408967  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:44.409040  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:44.417934  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:44.418058  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:44.446859  407575 cri.go:89] found id: ""
	I1025 10:48:44.446885  407575 logs.go:282] 0 containers: []
	W1025 10:48:44.446893  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:44.446900  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:44.447040  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:44.485246  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:44.485274  407575 cri.go:89] found id: ""
	I1025 10:48:44.485283  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:44.485340  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:44.489328  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:44.489419  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:44.534289  407575 cri.go:89] found id: ""
	I1025 10:48:44.534319  407575 logs.go:282] 0 containers: []
	W1025 10:48:44.534328  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:44.534335  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:44.534401  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:44.569331  407575 cri.go:89] found id: ""
	I1025 10:48:44.569358  407575 logs.go:282] 0 containers: []
	W1025 10:48:44.569368  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:44.569376  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:44.569388  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:44.607064  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:44.607101  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:44.687832  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:44.687866  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:44.725633  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:44.725714  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:44.800337  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:44.800377  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:44.841185  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:44.841254  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:48:44.983504  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:44.983620  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:45.003479  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:45.003763  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:45.124615  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:46.668844  420177 ssh_runner.go:195] Run: openssl version
	I1025 10:48:46.675491  420177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:48:46.684385  420177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:48:46.693243  420177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:48:46.693310  420177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:48:46.816123  420177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:48:46.839416  420177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:48:46.859369  420177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:48:46.867260  420177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:48:46.867325  420177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:48:46.984667  420177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:48:47.005650  420177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:48:47.031087  420177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:48:47.043380  420177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:48:47.043446  420177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:48:47.115799  420177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
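
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's CA-path convention: each trusted cert must be reachable in /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 here). A sketch of the same two steps in Go, shelling out to openssl for the hash rather than re-implementing its canonical subject hashing (helper name is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into dir under OpenSSL's
// <subject-hash>.0 naming, mirroring the ln -fs calls in the log.
func linkBySubjectHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
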
	I1025 10:48:47.124871  420177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:48:47.134372  420177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:48:47.198891  420177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:48:47.266586  420177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:48:47.330936  420177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:48:47.391059  420177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:48:47.442423  420177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
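
Each openssl x509 -checkend 86400 run above asks whether the certificate is still valid 24 hours from now. The equivalent check in pure Go with crypto/x509 (a sketch; the paths are the ones from the log and would need root to read on a real node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, soon, err)
	}
}
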
	I1025 10:48:47.503589  420177 kubeadm.go:400] StartCluster: {Name:pause-494622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-494622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:48:47.503710  420177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:48:47.503788  420177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:48:47.545934  420177 cri.go:89] found id: "96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a"
	I1025 10:48:47.545959  420177 cri.go:89] found id: "0411316bf38375595d1010d1a674f0717b0b515e8d8abbbff9e7ea89d8444814"
	I1025 10:48:47.545964  420177 cri.go:89] found id: "336fcce7f177ee63099c95f463857f65a6c8674b4cae330456af35a66d1e5927"
	I1025 10:48:47.545967  420177 cri.go:89] found id: "6a52960107a32d9d63c9a726cde40d6bc306416bd0198608ded2c7804daad2a9"
	I1025 10:48:47.545971  420177 cri.go:89] found id: "a2575b358a4844d89fec42a8040e731bf10578ac0841857c5d57c9f3d436492e"
	I1025 10:48:47.545974  420177 cri.go:89] found id: "9dd9f6a0583890e7ee49e45dee555a894f7bebf9e5043ef4e4d76611b6528f01"
	I1025 10:48:47.545978  420177 cri.go:89] found id: "0dacb3499bb498eb60afdc5550e70098c64ba1e92df1f33f6f5990e014b49766"
	I1025 10:48:47.546024  420177 cri.go:89] found id: "0a2ac9c53256707ef6dd02317248b4d542d804a6bc9fa4ffe7fcf73c2e0e74ba"
	I1025 10:48:47.546027  420177 cri.go:89] found id: "3e2ff0d6a6cab5c143e5ccbf87b8ae6a4d27061da41107c8817afeec41eb6940"
	I1025 10:48:47.546036  420177 cri.go:89] found id: "5a391da839348564e6c59f05bd1af2867b2ba66f17ea3ba8731f53c762dce341"
	I1025 10:48:47.546043  420177 cri.go:89] found id: "4f17ef8ba1aa56544d98deddadc6648233f1aa7f176fb6f9cb061a02e556af0f"
	I1025 10:48:47.546047  420177 cri.go:89] found id: "ee7dbc55c95114fc27b76a23f146b7b3cdf19a29f316645a7438a38ba79d5fca"
	I1025 10:48:47.546050  420177 cri.go:89] found id: "56698e4599135d8d0d3a8b15f20fb0fcbcf302ce721ba8c99956c5c54be1673d"
	I1025 10:48:47.546053  420177 cri.go:89] found id: ""
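
The "found id" list above comes from a crictl ps -a --quiet invocation, which prints one container ID per line; splitting that output on newlines yields a trailing empty element, which is why an empty found id: "" closes each list. A sketch of that listing and parse (helper name is hypothetical; this version drops the trailing empty entry that the log prints):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the crictl invocation in the log:
// one container ID per output line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	fmt.Println(ids, err)
}
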
	I1025 10:48:47.546105  420177 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:48:47.558984  420177 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:48:47Z" level=error msg="open /run/runc: no such file or directory"
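
The runc failure above is benign here: /run/runc is runc's default state directory, and its absence simply means no containers are currently tracked there, so minikube logs the "unpause failed" warning and carries on to the config-file check. A sketch of tolerating that case instead of failing hard (hypothetical helper; the real handling lives in minikube's kubeadm package):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// listRuncContainers runs `runc list -f json` and treats a missing
// state directory as an empty list rather than a hard failure,
// matching the warn-and-continue behavior in the log.
func listRuncContainers() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return "[]", nil // no runc state dir: nothing is paused
		}
		return "", errors.New(strings.TrimSpace(string(out)))
	}
	return string(out), nil
}

func main() {
	list, err := listRuncContainers()
	fmt.Println(list, err)
}
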
	I1025 10:48:47.559088  420177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:48:47.570998  420177 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:48:47.571022  420177 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:48:47.571078  420177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:48:47.590482  420177 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:48:47.591133  420177 kubeconfig.go:125] found "pause-494622" server: "https://192.168.85.2:8443"
	I1025 10:48:47.591937  420177 kapi.go:59] client config for pause-494622: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:48:47.592420  420177 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 10:48:47.592436  420177 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 10:48:47.592442  420177 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 10:48:47.592451  420177 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 10:48:47.592456  420177 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 10:48:47.592823  420177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:48:47.624037  420177 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:48:47.624073  420177 kubeadm.go:601] duration metric: took 53.044306ms to restartPrimaryControlPlane
	I1025 10:48:47.624083  420177 kubeadm.go:402] duration metric: took 120.504497ms to StartCluster
	I1025 10:48:47.624098  420177 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:48:47.624160  420177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:48:47.625100  420177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:48:47.625323  420177 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:48:47.626449  420177 config.go:182] Loaded profile config "pause-494622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:48:47.626552  420177 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:48:47.633564  420177 out.go:179] * Verifying Kubernetes components...
	I1025 10:48:47.633681  420177 out.go:179] * Enabled addons: 
	I1025 10:48:47.637976  420177 addons.go:514] duration metric: took 11.420491ms for enable addons: enabled=[]
	I1025 10:48:47.638121  420177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:48:48.044329  420177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:48:48.082101  420177 node_ready.go:35] waiting up to 6m0s for node "pause-494622" to be "Ready" ...
	I1025 10:48:47.625433  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:47.625760  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:47.625797  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:47.625849  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:47.680673  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:47.680694  407575 cri.go:89] found id: ""
	I1025 10:48:47.680702  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:47.680763  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:47.684864  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:47.684940  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:47.733200  407575 cri.go:89] found id: ""
	I1025 10:48:47.733225  407575 logs.go:282] 0 containers: []
	W1025 10:48:47.733233  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:47.733243  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:47.733299  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:47.776490  407575 cri.go:89] found id: ""
	I1025 10:48:47.776512  407575 logs.go:282] 0 containers: []
	W1025 10:48:47.776521  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:47.776527  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:47.776586  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:47.824447  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:47.824467  407575 cri.go:89] found id: ""
	I1025 10:48:47.824475  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:47.824531  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:47.828576  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:47.828647  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:47.879152  407575 cri.go:89] found id: ""
	I1025 10:48:47.879174  407575 logs.go:282] 0 containers: []
	W1025 10:48:47.879183  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:47.879192  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:47.879251  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:47.925520  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:47.925584  407575 cri.go:89] found id: ""
	I1025 10:48:47.925596  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:47.925653  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:47.929563  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:47.929690  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:47.977060  407575 cri.go:89] found id: ""
	I1025 10:48:47.977136  407575 logs.go:282] 0 containers: []
	W1025 10:48:47.977161  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:47.977188  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:47.977304  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:48.016817  407575 cri.go:89] found id: ""
	I1025 10:48:48.016896  407575 logs.go:282] 0 containers: []
	W1025 10:48:48.016921  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:48.016948  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:48.017031  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:48.052352  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:48.052436  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:48.159870  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:48.159892  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:48.159908  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:48.197106  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:48.197142  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:48.282024  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:48.282067  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:48.319979  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:48.320011  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:48.399770  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:48.399808  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:48.494989  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:48.495018  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:48:51.187924  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:51.188330  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:51.188374  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:51.188437  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:51.271734  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:51.271756  407575 cri.go:89] found id: ""
	I1025 10:48:51.271765  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:51.271824  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:51.278673  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:51.278751  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:51.337197  407575 cri.go:89] found id: ""
	I1025 10:48:51.337226  407575 logs.go:282] 0 containers: []
	W1025 10:48:51.337235  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:51.337241  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:51.337298  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:51.375970  407575 cri.go:89] found id: ""
	I1025 10:48:51.375996  407575 logs.go:282] 0 containers: []
	W1025 10:48:51.376004  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:51.376011  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:51.376065  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:51.426537  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:51.426562  407575 cri.go:89] found id: ""
	I1025 10:48:51.426571  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:51.426627  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:51.432384  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:51.432459  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:51.480288  407575 cri.go:89] found id: ""
	I1025 10:48:51.480315  407575 logs.go:282] 0 containers: []
	W1025 10:48:51.480331  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:51.480338  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:51.480397  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:51.524869  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:51.524904  407575 cri.go:89] found id: ""
	I1025 10:48:51.524912  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:51.524976  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:51.533331  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:51.533414  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:51.576821  407575 cri.go:89] found id: ""
	I1025 10:48:51.576857  407575 logs.go:282] 0 containers: []
	W1025 10:48:51.576866  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:51.576873  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:51.576942  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:51.616988  407575 cri.go:89] found id: ""
	I1025 10:48:51.617024  407575 logs.go:282] 0 containers: []
	W1025 10:48:51.617033  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:51.617043  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:51.617055  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:51.640072  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:51.640114  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:51.754735  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:51.754761  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:51.754776  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:51.819978  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:51.820019  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:51.923222  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:51.923259  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:51.982535  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:51.982565  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:52.052047  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:52.052089  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:52.107382  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:52.107416  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:48:52.377280  420177 node_ready.go:49] node "pause-494622" is "Ready"
	I1025 10:48:52.377311  420177 node_ready.go:38] duration metric: took 4.295111717s for node "pause-494622" to be "Ready" ...
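
The node-Ready wait above polls the Node object until its Ready condition reports True. A minimal client-go sketch of the same predicate (the kubeconfig path, poll interval, and loop shape here are illustrative, not minikube's):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node's Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ok, err := nodeReady(cs, "pause-494622")
		fmt.Println("ready:", ok, "err:", err)
		if ok {
			return
		}
		time.Sleep(2 * time.Second)
	}
}
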
	I1025 10:48:52.377326  420177 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:48:52.377384  420177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:48:52.394632  420177 api_server.go:72] duration metric: took 4.76926334s to wait for apiserver process to appear ...
	I1025 10:48:52.394654  420177 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:48:52.394674  420177 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:48:52.416402  420177 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 10:48:52.416427  420177 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 10:48:52.894778  420177 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:48:52.907901  420177 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:48:52.907927  420177 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:48:53.395525  420177 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:48:53.403643  420177 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 10:48:53.405026  420177 api_server.go:141] control plane version: v1.34.1
	I1025 10:48:53.405058  420177 api_server.go:131] duration metric: took 1.010396295s to wait for apiserver health ...
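
The healthz sequence above is the expected startup shape: anonymous probes are first rejected (403), then see 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, before settling at 200. A stdlib sketch of such a probe loop (certificate verification is skipped for brevity; a faithful client would trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification keeps the sketch short; minikube's real
		// probe is configured against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while apiserver restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Println(resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
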
	I1025 10:48:53.405068  420177 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:48:53.410387  420177 system_pods.go:59] 7 kube-system pods found
	I1025 10:48:53.410432  420177 system_pods.go:61] "coredns-66bc5c9577-hxv7f" [4ede21c9-566e-4bba-881f-5aa690ed4934] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:48:53.410442  420177 system_pods.go:61] "etcd-pause-494622" [c254a2ab-dcbd-4d7b-838c-7a91485f45fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:48:53.410474  420177 system_pods.go:61] "kindnet-zprkn" [5aa10493-5fd8-4bf9-b4d7-5ca08b07f0aa] Running
	I1025 10:48:53.410496  420177 system_pods.go:61] "kube-apiserver-pause-494622" [ed4419cb-f4c5-497b-a154-4e254454f220] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:48:53.410518  420177 system_pods.go:61] "kube-controller-manager-pause-494622" [d5fa5bd3-5558-4b1c-8c16-cd3f4979d38b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:48:53.410523  420177 system_pods.go:61] "kube-proxy-tmr4x" [b0951588-0d5e-4c4d-a26e-32fe980890b4] Running
	I1025 10:48:53.410530  420177 system_pods.go:61] "kube-scheduler-pause-494622" [4547db9f-4029-4148-8737-db0dfb5f30b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:48:53.410539  420177 system_pods.go:74] duration metric: took 5.465201ms to wait for pod list to return data ...
	I1025 10:48:53.410548  420177 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:48:53.416865  420177 default_sa.go:45] found service account: "default"
	I1025 10:48:53.416902  420177 default_sa.go:55] duration metric: took 6.346654ms for default service account to be created ...
	I1025 10:48:53.416913  420177 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:48:53.423405  420177 system_pods.go:86] 7 kube-system pods found
	I1025 10:48:53.423441  420177 system_pods.go:89] "coredns-66bc5c9577-hxv7f" [4ede21c9-566e-4bba-881f-5aa690ed4934] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:48:53.423451  420177 system_pods.go:89] "etcd-pause-494622" [c254a2ab-dcbd-4d7b-838c-7a91485f45fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:48:53.423479  420177 system_pods.go:89] "kindnet-zprkn" [5aa10493-5fd8-4bf9-b4d7-5ca08b07f0aa] Running
	I1025 10:48:53.423491  420177 system_pods.go:89] "kube-apiserver-pause-494622" [ed4419cb-f4c5-497b-a154-4e254454f220] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:48:53.423499  420177 system_pods.go:89] "kube-controller-manager-pause-494622" [d5fa5bd3-5558-4b1c-8c16-cd3f4979d38b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:48:53.423507  420177 system_pods.go:89] "kube-proxy-tmr4x" [b0951588-0d5e-4c4d-a26e-32fe980890b4] Running
	I1025 10:48:53.423532  420177 system_pods.go:89] "kube-scheduler-pause-494622" [4547db9f-4029-4148-8737-db0dfb5f30b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:48:53.423557  420177 system_pods.go:126] duration metric: took 6.636953ms to wait for k8s-apps to be running ...
	I1025 10:48:53.423572  420177 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:48:53.423642  420177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:48:53.437355  420177 system_svc.go:56] duration metric: took 13.774023ms WaitForService to wait for kubelet
	I1025 10:48:53.437432  420177 kubeadm.go:586] duration metric: took 5.812075649s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:48:53.437471  420177 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:48:53.440982  420177 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:48:53.441074  420177 node_conditions.go:123] node cpu capacity is 2
	I1025 10:48:53.441103  420177 node_conditions.go:105] duration metric: took 3.612441ms to run NodePressure ...
	I1025 10:48:53.441123  420177 start.go:241] waiting for startup goroutines ...
	I1025 10:48:53.441132  420177 start.go:246] waiting for cluster config update ...
	I1025 10:48:53.441151  420177 start.go:255] writing updated cluster config ...
	I1025 10:48:53.441481  420177 ssh_runner.go:195] Run: rm -f paused
	I1025 10:48:53.445506  420177 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:48:53.446199  420177 kapi.go:59] client config for pause-494622: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:48:53.449667  420177 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hxv7f" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:48:55.470600  420177 pod_ready.go:104] pod "coredns-66bc5c9577-hxv7f" is not "Ready", error: <nil>
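
The predicate behind the "waiting for pod … to be Ready or be gone" lines is two-sided: the wait succeeds when the pod reports the Ready condition, or when it no longer exists at all. A client-go sketch of that check (hypothetical helper; kubeconfig discovery is simplified):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReadyOrGone succeeds when the pod reports Ready, or when it no
// longer exists ("gone" counts as done).
func podReadyOrGone(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(podReadyOrGone(cs, "kube-system", "coredns-66bc5c9577-hxv7f"))
}
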
	I1025 10:48:54.775143  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:54.775612  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:54.775663  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:54.775724  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:54.817375  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:54.817402  407575 cri.go:89] found id: ""
	I1025 10:48:54.817411  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:54.817466  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:54.821259  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:54.821343  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:54.853775  407575 cri.go:89] found id: ""
	I1025 10:48:54.853802  407575 logs.go:282] 0 containers: []
	W1025 10:48:54.853811  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:54.853818  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:54.853877  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:54.886067  407575 cri.go:89] found id: ""
	I1025 10:48:54.886094  407575 logs.go:282] 0 containers: []
	W1025 10:48:54.886103  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:54.886109  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:54.886169  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:54.914393  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:54.914415  407575 cri.go:89] found id: ""
	I1025 10:48:54.914423  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:54.914481  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:54.918282  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:54.918373  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:54.946544  407575 cri.go:89] found id: ""
	I1025 10:48:54.946570  407575 logs.go:282] 0 containers: []
	W1025 10:48:54.946579  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:54.946587  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:54.946650  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:54.989737  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:54.989760  407575 cri.go:89] found id: ""
	I1025 10:48:54.989768  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:54.989829  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:54.993593  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:54.993695  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:55.028699  407575 cri.go:89] found id: ""
	I1025 10:48:55.028730  407575 logs.go:282] 0 containers: []
	W1025 10:48:55.028738  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:55.028745  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:55.028810  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:55.056132  407575 cri.go:89] found id: ""
	I1025 10:48:55.056159  407575 logs.go:282] 0 containers: []
	W1025 10:48:55.056167  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:55.056178  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:55.056189  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:55.089964  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:55.090074  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:48:55.213937  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:55.213974  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:55.243098  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:55.243138  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:55.312763  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:55.312786  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:55.312799  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:55.345729  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:55.345763  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:55.407751  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:55.407786  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:55.435382  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:55.435413  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
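
Editor's note: the cycle above (cri.go:54 → ssh_runner → cri.go:89) repeats once per control-plane component: list all containers whose name matches, collect the IDs, then pull logs for each hit. A minimal local sketch of that listing step (an assumption for illustration; the real cri.go runs these commands over SSH inside the node):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers issues the same crictl invocation the log shows:
    // all states, IDs only, filtered by container name.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := listContainers(c)
            fmt.Printf("%s: %v (err=%v)\n", c, ids, err)
        }
    }
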
	W1025 10:48:57.955184  420177 pod_ready.go:104] pod "coredns-66bc5c9577-hxv7f" is not "Ready", error: <nil>
	I1025 10:48:59.455843  420177 pod_ready.go:94] pod "coredns-66bc5c9577-hxv7f" is "Ready"
	I1025 10:48:59.455883  420177 pod_ready.go:86] duration metric: took 6.006189531s for pod "coredns-66bc5c9577-hxv7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:48:59.458761  420177 pod_ready.go:83] waiting for pod "etcd-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:49:01.466458  420177 pod_ready.go:104] pod "etcd-pause-494622" is not "Ready", error: <nil>
	I1025 10:48:57.995639  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:57.996079  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:57.996127  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:57.996186  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:58.026719  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:58.026744  407575 cri.go:89] found id: ""
	I1025 10:48:58.026754  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:58.026816  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:58.030693  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:58.030770  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:58.063753  407575 cri.go:89] found id: ""
	I1025 10:48:58.063778  407575 logs.go:282] 0 containers: []
	W1025 10:48:58.063787  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:58.063794  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:58.063854  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:58.091620  407575 cri.go:89] found id: ""
	I1025 10:48:58.091699  407575 logs.go:282] 0 containers: []
	W1025 10:48:58.091715  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:58.091723  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:58.091797  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:58.119102  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:58.119126  407575 cri.go:89] found id: ""
	I1025 10:48:58.119134  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:58.119193  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:58.122974  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:58.123056  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:58.150655  407575 cri.go:89] found id: ""
	I1025 10:48:58.150681  407575 logs.go:282] 0 containers: []
	W1025 10:48:58.150690  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:58.150698  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:58.150759  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:58.179348  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:58.179372  407575 cri.go:89] found id: ""
	I1025 10:48:58.179380  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:58.179444  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:58.183368  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:58.183446  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:58.210274  407575 cri.go:89] found id: ""
	I1025 10:48:58.210302  407575 logs.go:282] 0 containers: []
	W1025 10:48:58.210313  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:58.210321  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:58.210383  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:58.238525  407575 cri.go:89] found id: ""
	I1025 10:48:58.238547  407575 logs.go:282] 0 containers: []
	W1025 10:48:58.238556  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:58.238565  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:58.238577  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:58.257827  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:58.257933  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:58.324852  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:58.324871  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:58.324883  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:58.359826  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:58.359863  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:58.420783  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:58.420820  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:58.450071  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:58.450101  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:58.507255  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:58.507300  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:58.538953  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:58.538983  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:49:01.157245  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:49:01.157745  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:49:01.157808  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:49:01.157883  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:49:01.185737  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:49:01.185759  407575 cri.go:89] found id: ""
	I1025 10:49:01.185768  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:49:01.185824  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:49:01.189571  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:49:01.189667  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:49:01.218681  407575 cri.go:89] found id: ""
	I1025 10:49:01.218714  407575 logs.go:282] 0 containers: []
	W1025 10:49:01.218723  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:49:01.218730  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:49:01.218792  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:49:01.251461  407575 cri.go:89] found id: ""
	I1025 10:49:01.251486  407575 logs.go:282] 0 containers: []
	W1025 10:49:01.251494  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:49:01.251501  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:49:01.251561  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:49:01.278770  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:49:01.278793  407575 cri.go:89] found id: ""
	I1025 10:49:01.278801  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:49:01.278860  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:49:01.282540  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:49:01.282626  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:49:01.313801  407575 cri.go:89] found id: ""
	I1025 10:49:01.313825  407575 logs.go:282] 0 containers: []
	W1025 10:49:01.313834  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:49:01.313841  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:49:01.313905  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:49:01.340546  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:49:01.340618  407575 cri.go:89] found id: ""
	I1025 10:49:01.340658  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:49:01.340760  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:49:01.344428  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:49:01.344566  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:49:01.370669  407575 cri.go:89] found id: ""
	I1025 10:49:01.370695  407575 logs.go:282] 0 containers: []
	W1025 10:49:01.370705  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:49:01.370711  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:49:01.370771  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:49:01.397940  407575 cri.go:89] found id: ""
	I1025 10:49:01.397965  407575 logs.go:282] 0 containers: []
	W1025 10:49:01.397974  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:49:01.398006  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:49:01.398021  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:49:01.455651  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:49:01.455692  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:49:01.492167  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:49:01.492194  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:49:01.609165  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:49:01.609205  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:49:01.627336  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:49:01.627369  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:49:01.697962  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:49:01.698013  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:49:01.698045  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:49:01.738767  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:49:01.738843  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:49:01.806042  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:49:01.806080  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	W1025 10:49:03.964482  420177 pod_ready.go:104] pod "etcd-pause-494622" is not "Ready", error: <nil>
	I1025 10:49:06.466193  420177 pod_ready.go:94] pod "etcd-pause-494622" is "Ready"
	I1025 10:49:06.466231  420177 pod_ready.go:86] duration metric: took 7.007442456s for pod "etcd-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.469267  420177 pod_ready.go:83] waiting for pod "kube-apiserver-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.475692  420177 pod_ready.go:94] pod "kube-apiserver-pause-494622" is "Ready"
	I1025 10:49:06.475770  420177 pod_ready.go:86] duration metric: took 6.469395ms for pod "kube-apiserver-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.479235  420177 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.485486  420177 pod_ready.go:94] pod "kube-controller-manager-pause-494622" is "Ready"
	I1025 10:49:06.485561  420177 pod_ready.go:86] duration metric: took 6.252343ms for pod "kube-controller-manager-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.488545  420177 pod_ready.go:83] waiting for pod "kube-proxy-tmr4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.664240  420177 pod_ready.go:94] pod "kube-proxy-tmr4x" is "Ready"
	I1025 10:49:06.664345  420177 pod_ready.go:86] duration metric: took 175.726386ms for pod "kube-proxy-tmr4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.867156  420177 pod_ready.go:83] waiting for pod "kube-scheduler-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:07.265197  420177 pod_ready.go:94] pod "kube-scheduler-pause-494622" is "Ready"
	I1025 10:49:07.265295  420177 pod_ready.go:86] duration metric: took 398.011707ms for pod "kube-scheduler-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:07.265336  420177 pod_ready.go:40] duration metric: took 13.819796339s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:49:07.347987  420177 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:49:07.351055  420177 out.go:179] * Done! kubectl is now configured to use "pause-494622" cluster and "default" namespace by default
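
Editor's note: the pod_ready waits that finish above poll each component pod until its PodReady condition reports True (or the pod disappears). A sketch of the readiness predicate those log lines imply (assumed shape for illustration, not the actual pod_ready.go source):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady returns true when the pod's PodReady condition is True,
    // the check behind the `pod "..." is "Ready"` lines above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }}}
        fmt.Println(isPodReady(p)) // true
    }
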
	I1025 10:49:04.332586  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:49:04.333021  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:49:04.333065  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:49:04.333131  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:49:04.364849  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:49:04.364874  407575 cri.go:89] found id: ""
	I1025 10:49:04.364886  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:49:04.364954  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:49:04.368735  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:49:04.368807  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:49:04.394745  407575 cri.go:89] found id: ""
	I1025 10:49:04.394778  407575 logs.go:282] 0 containers: []
	W1025 10:49:04.394789  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:49:04.394796  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:49:04.394857  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:49:04.421884  407575 cri.go:89] found id: ""
	I1025 10:49:04.421908  407575 logs.go:282] 0 containers: []
	W1025 10:49:04.421917  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:49:04.421923  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:49:04.422059  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:49:04.451180  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:49:04.451203  407575 cri.go:89] found id: ""
	I1025 10:49:04.451212  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:49:04.451277  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:49:04.455100  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:49:04.455196  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:49:04.485451  407575 cri.go:89] found id: ""
	I1025 10:49:04.485486  407575 logs.go:282] 0 containers: []
	W1025 10:49:04.485495  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:49:04.485502  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:49:04.485572  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:49:04.512939  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:49:04.512963  407575 cri.go:89] found id: ""
	I1025 10:49:04.512971  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:49:04.513035  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:49:04.517803  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:49:04.517875  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:49:04.543990  407575 cri.go:89] found id: ""
	I1025 10:49:04.544017  407575 logs.go:282] 0 containers: []
	W1025 10:49:04.544026  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:49:04.544032  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:49:04.544141  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:49:04.580312  407575 cri.go:89] found id: ""
	I1025 10:49:04.580378  407575 logs.go:282] 0 containers: []
	W1025 10:49:04.580401  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:49:04.580430  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:49:04.580469  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:49:04.601747  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:49:04.601833  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:49:04.673642  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:49:04.673665  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:49:04.673678  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:49:04.716162  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:49:04.716198  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:49:04.773950  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:49:04.774003  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:49:04.804150  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:49:04.804178  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:49:04.863060  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:49:04.863095  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:49:04.896130  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:49:04.896160  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
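
Editor's note: throughout this stretch the other profile's apiserver at 192.168.76.2:8443 never comes back, so every healthz probe fails with "connection refused" and the tool falls back to another round of log gathering. A minimal sketch of that probe (an assumption for illustration, not api_server.go; real code would trust the cluster CA instead of skipping verification):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err) // matches the "connection refused" lines above
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }
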
	
	
	==> CRI-O <==
	Oct 25 10:48:46 pause-494622 crio[2058]: time="2025-10-25T10:48:46.944535248Z" level=info msg="Starting container: 0411316bf38375595d1010d1a674f0717b0b515e8d8abbbff9e7ea89d8444814" id=87e989b6-a270-48cb-a958-c072d9f0c6e7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:48:46 pause-494622 crio[2058]: time="2025-10-25T10:48:46.969703128Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:48:46 pause-494622 crio[2058]: time="2025-10-25T10:48:46.970650099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:48:46 pause-494622 crio[2058]: time="2025-10-25T10:48:46.984018539Z" level=info msg="Started container" PID=2302 containerID=0411316bf38375595d1010d1a674f0717b0b515e8d8abbbff9e7ea89d8444814 description=kube-system/etcd-pause-494622/etcd id=87e989b6-a270-48cb-a958-c072d9f0c6e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c497ffe2999dc56acac9591d86db75448418124a24ba02bf69912bd8cc462ff0
	Oct 25 10:48:47 pause-494622 crio[2058]: time="2025-10-25T10:48:47.031492635Z" level=info msg="Created container 96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a: kube-system/kindnet-zprkn/kindnet-cni" id=361aa80f-dea3-4e2d-9315-b0971a293f74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:48:47 pause-494622 crio[2058]: time="2025-10-25T10:48:47.035430781Z" level=info msg="Starting container: 96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a" id=1c10d891-fe5e-4561-b881-cf06573cdb89 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:48:47 pause-494622 crio[2058]: time="2025-10-25T10:48:47.037225736Z" level=info msg="Started container" PID=2322 containerID=96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a description=kube-system/kindnet-zprkn/kindnet-cni id=1c10d891-fe5e-4561-b881-cf06573cdb89 name=/runtime.v1.RuntimeService/StartContainer sandboxID=55a4dd7e7fa8a8e138de97901e9220c6c675e7a6839b9f8756bc73266af3663d
	Oct 25 10:48:47 pause-494622 crio[2058]: time="2025-10-25T10:48:47.608650609Z" level=info msg="Created container b30e317be3539ac55cad517b2c4bbfd1d83ba79be6b363594a6726c56ecba536: kube-system/kube-proxy-tmr4x/kube-proxy" id=d144e588-c6c2-4c07-9f42-5e207442f65f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:48:47 pause-494622 crio[2058]: time="2025-10-25T10:48:47.61236477Z" level=info msg="Starting container: b30e317be3539ac55cad517b2c4bbfd1d83ba79be6b363594a6726c56ecba536" id=e9c4b014-b0ae-4a74-8f19-2d161417d1ba name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:48:47 pause-494622 crio[2058]: time="2025-10-25T10:48:47.616760792Z" level=info msg="Started container" PID=2332 containerID=b30e317be3539ac55cad517b2c4bbfd1d83ba79be6b363594a6726c56ecba536 description=kube-system/kube-proxy-tmr4x/kube-proxy id=e9c4b014-b0ae-4a74-8f19-2d161417d1ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b5263aacb14681c4f70dd0233cb26b387ce1775c639c83cfe40ea8df4687304
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.463967774Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.468325133Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.468366897Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.468389962Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.472793475Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.472835486Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.472863991Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.480489664Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.480539781Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.480567252Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.490050912Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.490222442Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.490301318Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.496288157Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.496329273Z" level=info msg="Updated default CNI network name to kindnet"
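
Editor's note: the "CNI monitoring event" lines show CRI-O reacting to kindnet rewriting its conflist via a temp file: CREATE and WRITE events on 10-kindnet.conflist.temp, a RENAME, then CREATE of the final 10-kindnet.conflist, each followed by a reload of the default network. A sketch of that kind of directory watch (assumed mechanism, fsnotify-based; CRI-O's actual implementation differs in detail):

    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        // Watch the CNI config directory, as the events above suggest.
        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        for ev := range w.Events {
            if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
                log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
                // A runtime would re-parse the conflist and update the
                // default network name here.
            }
        }
    }
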
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b30e317be3539       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   23 seconds ago       Running             kube-proxy                1                   8b5263aacb146       kube-proxy-tmr4x                       kube-system
	96c83536eb517       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   55a4dd7e7fa8a       kindnet-zprkn                          kube-system
	0411316bf3837       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago       Running             etcd                      1                   c497ffe2999dc       etcd-pause-494622                      kube-system
	336fcce7f177e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   f33ce0d6881be       kube-scheduler-pause-494622            kube-system
	6a52960107a32       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago       Running             kube-apiserver            1                   3b7e79d0e549e       kube-apiserver-pause-494622            kube-system
	a2575b358a484       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago       Running             kube-controller-manager   1                   8d5a6c5509aaa       kube-controller-manager-pause-494622   kube-system
	9dd9f6a058389       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   71ee8a1159ac4       coredns-66bc5c9577-hxv7f               kube-system
	0dacb3499bb49       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   36 seconds ago       Exited              coredns                   0                   71ee8a1159ac4       coredns-66bc5c9577-hxv7f               kube-system
	0a2ac9c532567       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   55a4dd7e7fa8a       kindnet-zprkn                          kube-system
	3e2ff0d6a6cab       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   8b5263aacb146       kube-proxy-tmr4x                       kube-system
	5a391da839348       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   3b7e79d0e549e       kube-apiserver-pause-494622            kube-system
	4f17ef8ba1aa5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   c497ffe2999dc       etcd-pause-494622                      kube-system
	ee7dbc55c9511       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   8d5a6c5509aaa       kube-controller-manager-pause-494622   kube-system
	56698e4599135       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   f33ce0d6881be       kube-scheduler-pause-494622            kube-system
	
	
	==> coredns [0dacb3499bb498eb60afdc5550e70098c64ba1e92df1f33f6f5990e014b49766] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36432 - 60643 "HINFO IN 4746436151050476060.9097902981941713514. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.04248122s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9dd9f6a0583890e7ee49e45dee555a894f7bebf9e5043ef4e4d76611b6528f01] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50987 - 28941 "HINFO IN 7474770809676891525.1809044207862229286. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032983536s
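
Editor's note: the restarted coredns instance comes up before the apiserver is reachable (hence the "connection refused" list errors against 10.96.0.1:443), waits, then starts serving with an unsynced API. The random-label HINFO query logged last is typically CoreDNS's loop-plugin self-probe through its own listener. A sketch of resolving through a specific listener the same way (the 127.0.0.1:53 address is an assumption about the pod's listener, for illustration):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            // Force all lookups through the local CoreDNS listener.
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: time.Second}
                return d.DialContext(ctx, network, "127.0.0.1:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
        fmt.Println(addrs, err)
    }
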
	
	
	==> describe nodes <==
	Name:               pause-494622
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-494622
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=pause-494622
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_47_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:47:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-494622
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:49:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:48:34 +0000   Sat, 25 Oct 2025 10:47:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:48:34 +0000   Sat, 25 Oct 2025 10:47:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:48:34 +0000   Sat, 25 Oct 2025 10:47:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:48:34 +0000   Sat, 25 Oct 2025 10:48:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-494622
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                f2b31047-a0f3-404e-9b65-adb974dd9b26
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-hxv7f                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-494622                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kindnet-zprkn                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-494622             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-pause-494622    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-tmr4x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-494622             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 77s                kube-proxy       
	  Normal   Starting                 18s                kube-proxy       
	  Normal   NodeHasSufficientPID     93s (x8 over 93s)  kubelet          Node pause-494622 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 93s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  93s (x8 over 93s)  kubelet          Node pause-494622 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    93s (x8 over 93s)  kubelet          Node pause-494622 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 93s                kubelet          Starting kubelet.
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  84s                kubelet          Node pause-494622 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    84s                kubelet          Node pause-494622 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     84s                kubelet          Node pause-494622 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                node-controller  Node pause-494622 event: Registered Node pause-494622 in Controller
	  Normal   NodeReady                37s                kubelet          Node pause-494622 status is now: NodeReady
	  Normal   RegisteredNode           17s                node-controller  Node pause-494622 event: Registered Node pause-494622 in Controller
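
Editor's note: the percentages in the resource tables above are requests over the node's allocatable resources. For example, the 850m of CPU requested across the seven non-terminated pods, against 2 allocatable CPUs, is 850/2000 = 42.5%, which kubectl shows truncated as 42%; likewise 220Mi of memory over 8022296Ki allocatable is about 2.8%, shown as 2%.
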
	
	
	==> dmesg <==
	[  +4.737500] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[  +3.234784] overlayfs: idmapped layers are currently not supported
	[Oct25 10:23] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:25] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:31] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[ +23.146409] overlayfs: idmapped layers are currently not supported
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0411316bf38375595d1010d1a674f0717b0b515e8d8abbbff9e7ea89d8444814] <==
	{"level":"warn","ts":"2025-10-25T10:48:50.452917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.496203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.503168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.522940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.535063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.550433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.571824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.584145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.601562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.636153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.650208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.672089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.690679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.716063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.731338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.744644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.764978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.782298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.797500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.818055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.839032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.872210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.886737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.909774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.984139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56424","server-name":"","error":"EOF"}
	
	
	==> etcd [4f17ef8ba1aa56544d98deddadc6648233f1aa7f176fb6f9cb061a02e556af0f] <==
	{"level":"warn","ts":"2025-10-25T10:47:43.299447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:47:43.336996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:47:43.394329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:47:43.419146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:47:43.442542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:47:43.473480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:47:43.600626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50098","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T10:48:38.838565Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T10:48:38.838622Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-494622","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-25T10:48:38.838769Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T10:48:38.984503Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T10:48:38.984596Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T10:48:38.984641Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-25T10:48:38.984671Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-25T10:48:38.984757Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-25T10:48:38.984809Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:48:38.984831Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T10:48:38.984841Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-25T10:48:38.984760Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:48:38.984854Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T10:48:38.984860Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T10:48:38.987995Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-25T10:48:38.988070Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T10:48:38.988107Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:48:38.988125Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-494622","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 10:49:11 up  2:31,  0 user,  load average: 1.74, 2.57, 2.31
	Linux pause-494622 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0a2ac9c53256707ef6dd02317248b4d542d804a6bc9fa4ffe7fcf73c2e0e74ba] <==
	I1025 10:47:53.320081       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:47:53.320424       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:47:53.320572       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:47:53.320613       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:47:53.320653       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:47:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:47:53.522859       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:47:53.522888       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:47:53.522896       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:47:53.523278       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:48:23.522776       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:48:23.522895       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:48:23.523890       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:48:23.523940       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 10:48:24.723290       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:48:24.723326       1 metrics.go:72] Registering metrics
	I1025 10:48:24.723389       1 controller.go:711] "Syncing nftables rules"
	I1025 10:48:33.530086       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:48:33.530144       1 main.go:301] handling current node
	
	
	==> kindnet [96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a] <==
	I1025 10:48:47.186968       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:48:47.187466       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:48:47.187696       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:48:47.187736       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:48:47.187788       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:48:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:48:47.466058       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:48:47.466159       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:48:47.466200       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:48:47.470977       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:48:52.471051       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:48:52.471215       1 metrics.go:72] Registering metrics
	I1025 10:48:52.471373       1 controller.go:711] "Syncing nftables rules"
	I1025 10:48:57.463551       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:48:57.463678       1 main.go:301] handling current node
	I1025 10:49:07.463026       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:49:07.463105       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5a391da839348564e6c59f05bd1af2867b2ba66f17ea3ba8731f53c762dce341] <==
	W1025 10:48:38.860179       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860226       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860273       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860321       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860443       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860572       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860626       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860670       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860723       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860763       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860811       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860857       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860896       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861438       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861531       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861598       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861638       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861683       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861689       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861736       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861781       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861813       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861837       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861859       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861889       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [6a52960107a32d9d63c9a726cde40d6bc306416bd0198608ded2c7804daad2a9] <==
	I1025 10:48:52.434678       1 policy_source.go:240] refreshing policies
	I1025 10:48:52.435124       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:48:52.446949       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:48:52.449666       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:48:52.497677       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:48:52.497708       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:48:52.497823       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:48:52.503577       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:48:52.503678       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 10:48:52.504529       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:48:52.504689       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:48:52.504918       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:48:52.505234       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:48:52.505283       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:48:52.505292       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:48:52.505297       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:48:52.505302       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:48:52.505426       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1025 10:48:52.536993       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:48:52.913699       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:48:53.462318       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:48:54.970899       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:48:55.068719       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:48:55.168885       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:48:55.224824       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [a2575b358a4844d89fec42a8040e731bf10578ac0841857c5d57c9f3d436492e] <==
	I1025 10:48:54.846793       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:48:54.853118       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:48:54.853242       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:48:54.853308       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:48:54.853365       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:48:54.853394       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:48:54.857138       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:48:54.857161       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:48:54.857168       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:48:54.862114       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:48:54.862179       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:48:54.862217       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:48:54.862242       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:48:54.862972       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:48:54.863061       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:48:54.863103       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:48:54.865652       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:48:54.870236       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:48:54.870342       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:48:54.870370       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:48:54.870419       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:48:54.870459       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:48:54.871564       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:48:54.877251       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:48:54.883332       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-controller-manager [ee7dbc55c95114fc27b76a23f146b7b3cdf19a29f316645a7438a38ba79d5fca] <==
	I1025 10:47:51.502646       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:47:51.503666       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:47:51.503747       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:47:51.503804       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:47:51.503872       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:47:51.503948       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-494622"
	I1025 10:47:51.503991       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 10:47:51.504027       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:47:51.504054       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:47:51.504278       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:47:51.504793       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 10:47:51.504932       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:47:51.506843       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:47:51.507426       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:47:51.509070       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:47:51.509518       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:47:51.509564       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:47:51.509686       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:47:51.509848       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:47:51.509876       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:47:51.511691       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:47:51.511777       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:47:51.514423       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 10:47:51.525520       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:48:36.513718       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3e2ff0d6a6cab5c143e5ccbf87b8ae6a4d27061da41107c8817afeec41eb6940] <==
	I1025 10:47:53.312175       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:47:53.399661       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:47:53.500632       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:47:53.500675       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:47:53.500747       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:47:53.520739       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:47:53.520863       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:47:53.614768       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:47:53.615139       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:47:53.615387       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:47:53.619274       1 config.go:200] "Starting service config controller"
	I1025 10:47:53.619382       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:47:53.619706       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:47:53.619713       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:47:53.619740       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:47:53.619745       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:47:53.624774       1 config.go:309] "Starting node config controller"
	I1025 10:47:53.624858       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:47:53.624867       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:47:53.720754       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:47:53.720755       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:47:53.720860       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b30e317be3539ac55cad517b2c4bbfd1d83ba79be6b363594a6726c56ecba536] <==
	I1025 10:48:49.066682       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:48:50.005812       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:48:52.506618       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:48:52.514185       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:48:52.526241       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:48:52.718143       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:48:52.718218       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:48:52.790097       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:48:52.790783       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:48:52.791029       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:48:52.792346       1 config.go:200] "Starting service config controller"
	I1025 10:48:52.792818       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:48:52.792897       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:48:52.792932       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:48:52.792983       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:48:52.793024       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:48:52.794187       1 config.go:309] "Starting node config controller"
	I1025 10:48:52.794253       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:48:52.794285       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:48:52.893661       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:48:52.893732       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:48:52.893820       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [336fcce7f177ee63099c95f463857f65a6c8674b4cae330456af35a66d1e5927] <==
	I1025 10:48:50.322597       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:48:53.142320       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:48:53.142353       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:48:53.147558       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:48:53.147599       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:48:53.147640       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:48:53.147647       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:48:53.147661       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:48:53.147673       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:48:53.148735       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:48:53.148822       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:48:53.254245       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:48:53.254393       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 10:48:53.254516       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [56698e4599135d8d0d3a8b15f20fb0fcbcf302ce721ba8c99956c5c54be1673d] <==
	E1025 10:47:44.666554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:47:44.666677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:47:44.675442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:47:44.675642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:47:45.481181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:47:45.501445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:47:45.563856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:47:45.593162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:47:45.614985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:47:45.674449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:47:45.700484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:47:45.709079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:47:45.712378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:47:45.764205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:47:45.783631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:47:45.883294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:47:45.910837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:47:45.959477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1025 10:47:48.215009       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:48:38.842045       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1025 10:48:38.842166       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1025 10:48:38.842191       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1025 10:48:38.842222       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:48:38.842376       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1025 10:48:38.842392       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.848650    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-494622\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="870f9d95e8db34b2e3bf140101c93265" pod="kube-system/kube-controller-manager-pause-494622"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.850418    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-494622\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4c4ce95ad339f04df5a76cf3062661e9" pod="kube-system/kube-scheduler-pause-494622"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.850725    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-zprkn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5aa10493-5fd8-4bf9-b4d7-5ca08b07f0aa" pod="kube-system/kindnet-zprkn"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.850987    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-hxv7f\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ede21c9-566e-4bba-881f-5aa690ed4934" pod="kube-system/coredns-66bc5c9577-hxv7f"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: I1025 10:48:46.870677    1310 scope.go:117] "RemoveContainer" containerID="3e2ff0d6a6cab5c143e5ccbf87b8ae6a4d27061da41107c8817afeec41eb6940"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.871360    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-hxv7f\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ede21c9-566e-4bba-881f-5aa690ed4934" pod="kube-system/coredns-66bc5c9577-hxv7f"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.871663    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-494622\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4542b5a9c28d6dd4601ffd75d5f5e92b" pod="kube-system/etcd-pause-494622"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.875131    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-494622\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd24fe48314e95911e7153f5e59b89df" pod="kube-system/kube-apiserver-pause-494622"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.875428    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-494622\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="870f9d95e8db34b2e3bf140101c93265" pod="kube-system/kube-controller-manager-pause-494622"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.875680    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-494622\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4c4ce95ad339f04df5a76cf3062661e9" pod="kube-system/kube-scheduler-pause-494622"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.875958    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmr4x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b0951588-0d5e-4c4d-a26e-32fe980890b4" pod="kube-system/kube-proxy-tmr4x"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.876251    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-zprkn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5aa10493-5fd8-4bf9-b4d7-5ca08b07f0aa" pod="kube-system/kindnet-zprkn"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.307700    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-494622\" is forbidden: User \"system:node:pause-494622\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" podUID="4542b5a9c28d6dd4601ffd75d5f5e92b" pod="kube-system/etcd-pause-494622"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.308439    1310 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-494622\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.308559    1310 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-494622\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.308633    1310 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-494622\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.345182    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-494622\" is forbidden: User \"system:node:pause-494622\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" podUID="cd24fe48314e95911e7153f5e59b89df" pod="kube-system/kube-apiserver-pause-494622"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.376412    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-494622\" is forbidden: User \"system:node:pause-494622\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" podUID="870f9d95e8db34b2e3bf140101c93265" pod="kube-system/kube-controller-manager-pause-494622"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.392254    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-494622\" is forbidden: User \"system:node:pause-494622\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" podUID="4c4ce95ad339f04df5a76cf3062661e9" pod="kube-system/kube-scheduler-pause-494622"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.406854    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-tmr4x\" is forbidden: User \"system:node:pause-494622\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" podUID="b0951588-0d5e-4c4d-a26e-32fe980890b4" pod="kube-system/kube-proxy-tmr4x"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.422986    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-zprkn\" is forbidden: User \"system:node:pause-494622\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" podUID="5aa10493-5fd8-4bf9-b4d7-5ca08b07f0aa" pod="kube-system/kindnet-zprkn"
	Oct 25 10:49:07 pause-494622 kubelet[1310]: W1025 10:49:07.800956    1310 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 25 10:49:07 pause-494622 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:49:08 pause-494622 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:49:08 pause-494622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
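The wall of logs above comes from minikube's built-in log collector. A minimal shell sketch of pulling the same material by hand, assuming the binary and profile from this run (the journalctl tail is an illustrative addition, not part of the harness):

	# Dump the full post-mortem log bundle to a file instead of stdout.
	out/minikube-linux-arm64 -p pause-494622 logs --file=pause-494622.log
	# Or inspect a single unit directly on the node, e.g. the kubelet section above.
	out/minikube-linux-arm64 -p pause-494622 ssh -- sudo journalctl -u kubelet --no-pager -n 50
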
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-494622 -n pause-494622
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-494622 -n pause-494622: exit status 2 (540.454751ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
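For context, minikube status reports which components are unhealthy through its exit code, so a non-zero exit alongside a "Running" API server line is expected while parts of the cluster are paused or stopped; that is why the helper flags it as "(may be ok)". A minimal re-run of the same probe in shell (the profile name and format string are taken from this run; the rest is illustrative):

	# Capture stdout and the exit code separately; $? reflects the command substitution.
	apiserver="$(out/minikube-linux-arm64 status --format='{{.APIServer}}' -p pause-494622 -n pause-494622)"
	code=$?
	echo "apiserver=${apiserver} exit=${code}"   # here: apiserver=Running exit=2
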
helpers_test.go:269: (dbg) Run:  kubectl --context pause-494622 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
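The field selector in that kubectl call is stock functionality; a sketch of the same non-Running-pod query with readable columns (the custom-columns spec is an assumed convenience, not from the harness):

	# List pods in all namespaces whose phase is anything but Running.
	kubectl --context pause-494622 get po -A \
	  --field-selector=status.phase!=Running \
	  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase
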
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
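A minimal sketch of taking that proxy snapshot in bash, with <empty> standing in for unset variables as in the helper's output (the loop itself is an assumption, not the harness's actual code):

	for v in HTTP_PROXY HTTPS_PROXY NO_PROXY; do
	  # ${!v} is bash indirect expansion; fall back to <empty> when the variable is unset.
	  printf '%s="%s"\n' "$v" "${!v:-<empty>}"
	done
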
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-494622
helpers_test.go:243: (dbg) docker inspect pause-494622:

-- stdout --
	[
	    {
	        "Id": "8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc",
	        "Created": "2025-10-25T10:47:18.505812885Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 415967,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:47:18.579543096Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc/hostname",
	        "HostsPath": "/var/lib/docker/containers/8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc/hosts",
	        "LogPath": "/var/lib/docker/containers/8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc/8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc-json.log",
	        "Name": "/pause-494622",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-494622:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-494622",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8561594a4e294239bfac943f8b5aa947c95647235e7938e3ba1416b432caf0bc",
	                "LowerDir": "/var/lib/docker/overlay2/7cc3bd1ba4fbb850dd711949a736ec073b990dc7d577bb924e008bea21c85970-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7cc3bd1ba4fbb850dd711949a736ec073b990dc7d577bb924e008bea21c85970/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7cc3bd1ba4fbb850dd711949a736ec073b990dc7d577bb924e008bea21c85970/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7cc3bd1ba4fbb850dd711949a736ec073b990dc7d577bb924e008bea21c85970/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-494622",
	                "Source": "/var/lib/docker/volumes/pause-494622/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-494622",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-494622",
	                "name.minikube.sigs.k8s.io": "pause-494622",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f739a8ad6c0b599900149ebfd50aa15c33cc876a37eea479c29d2a1bad72969",
	            "SandboxKey": "/var/run/docker/netns/8f739a8ad6c0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33383"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33384"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33387"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33385"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-494622": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:1d:e2:27:6b:7f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d429c36bc6aeac81af962db139f6a44aa42c438edc8e849f181aeb21cc399667",
	                    "EndpointID": "666e17c355efdba59f5f69d9e31743b8181b120e6f0fd90aeed7607f991c2f82",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-494622",
	                        "8561594a4e29"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
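
For reference, the "Ports" map in the inspect output above is where the host-published ports (33383-33387) come from; a minimal Go sketch of the same query, assuming the Docker CLI is on PATH and using the identical Go template that appears in the cli_runner lines later in this log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the cli_runner lines below run against pause-494622.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "pause-494622").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("host port for 22/tcp:", strings.TrimSpace(string(out))) // 33383 in the dump above
	}
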
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-494622 -n pause-494622
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-494622 -n pause-494622: exit status 2 (517.059852ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
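
The "(may be ok)" note reflects that minikube encodes component state into the status exit code as bit flags, so exit status 2 with a Running host is consistent with a paused cluster. A decoding sketch; the flag values are an assumption taken from minikube's status command and may differ by version:

	package main

	import "fmt"

	const (
		// Assumed flag values (cmd/minikube/cmd/status.go); verify per version.
		minikubeNotRunning = 1 << 0 // host stopped
		clusterNotRunning  = 1 << 1 // apiserver stopped or paused
		k8sNotRunning      = 1 << 2 // kubelet stopped
	)

	func main() {
		code := 2 // exit status observed above
		fmt.Println("host down:   ", code&minikubeNotRunning != 0)
		fmt.Println("cluster down:", code&clusterNotRunning != 0) // true for a paused cluster
		fmt.Println("kubelet down:", code&k8sNotRunning != 0)
	}
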
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-494622 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-494622 logs -n 25: (1.413896376s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-670512 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:43 UTC │ 25 Oct 25 10:43 UTC │
	│ start   │ -p missing-upgrade-486371 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-486371    │ jenkins │ v1.32.0 │ 25 Oct 25 10:43 UTC │ 25 Oct 25 10:44 UTC │
	│ start   │ -p NoKubernetes-670512 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:43 UTC │ 25 Oct 25 10:44 UTC │
	│ start   │ -p missing-upgrade-486371 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-486371    │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:45 UTC │
	│ delete  │ -p NoKubernetes-670512                                                                                                                   │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:44 UTC │
	│ start   │ -p NoKubernetes-670512 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:44 UTC │
	│ ssh     │ -p NoKubernetes-670512 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │                     │
	│ stop    │ -p NoKubernetes-670512                                                                                                                   │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:44 UTC │
	│ start   │ -p NoKubernetes-670512 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:44 UTC │
	│ ssh     │ -p NoKubernetes-670512 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │                     │
	│ delete  │ -p NoKubernetes-670512                                                                                                                   │ NoKubernetes-670512       │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:44 UTC │
	│ start   │ -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:44 UTC │ 25 Oct 25 10:45 UTC │
	│ delete  │ -p missing-upgrade-486371                                                                                                                │ missing-upgrade-486371    │ jenkins │ v1.37.0 │ 25 Oct 25 10:45 UTC │ 25 Oct 25 10:45 UTC │
	│ start   │ -p stopped-upgrade-190411 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-190411    │ jenkins │ v1.32.0 │ 25 Oct 25 10:45 UTC │ 25 Oct 25 10:45 UTC │
	│ stop    │ -p kubernetes-upgrade-291330                                                                                                             │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:45 UTC │ 25 Oct 25 10:45 UTC │
	│ start   │ -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:45 UTC │                     │
	│ stop    │ stopped-upgrade-190411 stop                                                                                                              │ stopped-upgrade-190411    │ jenkins │ v1.32.0 │ 25 Oct 25 10:45 UTC │ 25 Oct 25 10:45 UTC │
	│ start   │ -p stopped-upgrade-190411 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-190411    │ jenkins │ v1.37.0 │ 25 Oct 25 10:45 UTC │ 25 Oct 25 10:46 UTC │
	│ delete  │ -p stopped-upgrade-190411                                                                                                                │ stopped-upgrade-190411    │ jenkins │ v1.37.0 │ 25 Oct 25 10:46 UTC │ 25 Oct 25 10:46 UTC │
	│ start   │ -p running-upgrade-031456 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-031456    │ jenkins │ v1.32.0 │ 25 Oct 25 10:46 UTC │ 25 Oct 25 10:46 UTC │
	│ start   │ -p running-upgrade-031456 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-031456    │ jenkins │ v1.37.0 │ 25 Oct 25 10:46 UTC │ 25 Oct 25 10:47 UTC │
	│ delete  │ -p running-upgrade-031456                                                                                                                │ running-upgrade-031456    │ jenkins │ v1.37.0 │ 25 Oct 25 10:47 UTC │ 25 Oct 25 10:47 UTC │
	│ start   │ -p pause-494622 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-494622              │ jenkins │ v1.37.0 │ 25 Oct 25 10:47 UTC │ 25 Oct 25 10:48 UTC │
	│ start   │ -p pause-494622 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-494622              │ jenkins │ v1.37.0 │ 25 Oct 25 10:48 UTC │ 25 Oct 25 10:49 UTC │
	│ pause   │ -p pause-494622 --alsologtostderr -v=5                                                                                                   │ pause-494622              │ jenkins │ v1.37.0 │ 25 Oct 25 10:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
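	
	The Audit table above is rendered from minikube's audit log. A rough Go sketch of reading it directly; the file location (logs/audit.json under the MINIKUBE_HOME shown in this run) and the field names are assumptions inferred from minikube's layout and the table columns, not verified against this build:
	
		package main
	
		import (
			"bufio"
			"encoding/json"
			"fmt"
			"os"
		)
	
		// entry guesses the per-line JSON shape from the table columns above.
		type entry struct {
			Data struct {
				Command string `json:"command"` // assumed field name
				Profile string `json:"profile"` // assumed field name
				Args    string `json:"args"`    // assumed field name
			} `json:"data"`
		}
	
		func main() {
			// Assumed location under the MINIKUBE_HOME from this log.
			f, err := os.Open("/home/jenkins/minikube-integration/21767-259409/.minikube/logs/audit.json")
			if err != nil {
				fmt.Println(err)
				return
			}
			defer f.Close()
			sc := bufio.NewScanner(f)
			for sc.Scan() {
				var e entry
				if json.Unmarshal(sc.Bytes(), &e) == nil && e.Data.Command != "" {
					fmt.Printf("%s\t%s\t%s\n", e.Data.Command, e.Data.Profile, e.Data.Args)
				}
			}
		}
	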
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:48:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
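	
	A small Go sketch that parses the header format documented on the line above ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg); the regular expression is an illustration, not klog's own parser:
	
		package main
	
		import (
			"fmt"
			"regexp"
		)
	
		// Severity, mmdd date, time with microseconds, thread/process id,
		// file:line source, then the message.
		var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)
	
		func main() {
			line := "I1025 10:48:36.665632  420177 out.go:360] Setting OutFile to fd 1 ..."
			if m := klogRe.FindStringSubmatch(line); m != nil {
				fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
					m[1], m[2], m[3], m[4], m[5], m[6])
			}
		}
	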
	I1025 10:48:36.665632  420177 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:48:36.665842  420177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:48:36.665871  420177 out.go:374] Setting ErrFile to fd 2...
	I1025 10:48:36.665893  420177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:48:36.666376  420177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:48:36.667381  420177 out.go:368] Setting JSON to false
	I1025 10:48:36.668364  420177 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9068,"bootTime":1761380249,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:48:36.668438  420177 start.go:141] virtualization:  
	I1025 10:48:36.673913  420177 out.go:179] * [pause-494622] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:48:36.676944  420177 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:48:36.677097  420177 notify.go:220] Checking for updates...
	I1025 10:48:36.682728  420177 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:48:36.685593  420177 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:48:36.688639  420177 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:48:36.691688  420177 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:48:36.694827  420177 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:48:36.698603  420177 config.go:182] Loaded profile config "pause-494622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:48:36.699397  420177 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:48:36.726096  420177 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:48:36.726268  420177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:48:36.792574  420177 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:48:36.78153526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:48:36.792692  420177 docker.go:318] overlay module found
	I1025 10:48:36.795880  420177 out.go:179] * Using the docker driver based on existing profile
	I1025 10:48:36.798727  420177 start.go:305] selected driver: docker
	I1025 10:48:36.798759  420177 start.go:925] validating driver "docker" against &{Name:pause-494622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-494622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:48:36.798892  420177 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:48:36.799029  420177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:48:36.867538  420177 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:48:36.858518127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:48:36.867967  420177 cni.go:84] Creating CNI manager for ""
	I1025 10:48:36.868032  420177 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:48:36.868081  420177 start.go:349] cluster config:
	{Name:pause-494622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-494622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:48:36.871291  420177 out.go:179] * Starting "pause-494622" primary control-plane node in "pause-494622" cluster
	I1025 10:48:36.874129  420177 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:48:36.877030  420177 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:48:36.879747  420177 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:48:36.879800  420177 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:48:36.879814  420177 cache.go:58] Caching tarball of preloaded images
	I1025 10:48:36.879848  420177 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:48:36.879914  420177 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:48:36.879924  420177 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:48:36.880077  420177 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/config.json ...
	I1025 10:48:36.898443  420177 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:48:36.898466  420177 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:48:36.898486  420177 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:48:36.898532  420177 start.go:360] acquireMachinesLock for pause-494622: {Name:mk69e910d428c5e2515675cd602840cb99bca6c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:48:36.898593  420177 start.go:364] duration metric: took 38.261µs to acquireMachinesLock for "pause-494622"
	I1025 10:48:36.898614  420177 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:48:36.898623  420177 fix.go:54] fixHost starting: 
	I1025 10:48:36.898878  420177 cli_runner.go:164] Run: docker container inspect pause-494622 --format={{.State.Status}}
	I1025 10:48:36.915955  420177 fix.go:112] recreateIfNeeded on pause-494622: state=Running err=<nil>
	W1025 10:48:36.915986  420177 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:48:34.294067  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:34.294546  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:34.294615  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:34.294693  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:34.336197  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:34.336219  407575 cri.go:89] found id: ""
	I1025 10:48:34.336228  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:34.336307  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:34.340674  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:34.340786  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:34.378735  407575 cri.go:89] found id: ""
	I1025 10:48:34.378763  407575 logs.go:282] 0 containers: []
	W1025 10:48:34.378773  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:34.378779  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:34.378839  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:34.426435  407575 cri.go:89] found id: ""
	I1025 10:48:34.426465  407575 logs.go:282] 0 containers: []
	W1025 10:48:34.426473  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:34.426480  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:34.426572  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:34.468861  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:34.468889  407575 cri.go:89] found id: ""
	I1025 10:48:34.468898  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:34.468954  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:34.473471  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:34.473548  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:34.517492  407575 cri.go:89] found id: ""
	I1025 10:48:34.517522  407575 logs.go:282] 0 containers: []
	W1025 10:48:34.517530  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:34.517540  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:34.517627  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:34.549422  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:34.549455  407575 cri.go:89] found id: ""
	I1025 10:48:34.549463  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:34.549555  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:34.553584  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:34.553671  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:34.592218  407575 cri.go:89] found id: ""
	I1025 10:48:34.592253  407575 logs.go:282] 0 containers: []
	W1025 10:48:34.592262  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:34.592284  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:34.592369  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:34.621198  407575 cri.go:89] found id: ""
	I1025 10:48:34.621237  407575 logs.go:282] 0 containers: []
	W1025 10:48:34.621246  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:34.621255  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:34.621294  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:34.716899  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:34.716922  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:34.716935  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:34.765804  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:34.765838  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:34.823684  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:34.823731  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:34.852608  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:34.852640  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:34.917392  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:34.917422  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:34.970653  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:34.970681  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:48:35.094676  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:35.094714  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
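	
	The 407575 lines above are one iteration of the apiserver readiness loop: probe /healthz, treat "connection refused" as stopped, gather component logs, retry a few seconds later. A minimal Go sketch of that probe, with certificate verification skipped for brevity (the real client trusts the cluster CA):
	
		package main
	
		import (
			"crypto/tls"
			"fmt"
			"net/http"
			"time"
		)
	
		func main() {
			client := &http.Client{
				Timeout:   2 * time.Second,
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			for i := 0; i < 5; i++ {
				resp, err := client.Get("https://192.168.76.2:8443/healthz") // endpoint from the log
				if err != nil {
					fmt.Println("stopped:", err) // e.g. connect: connection refused, as above
					time.Sleep(3 * time.Second)  // the log shows roughly 3s between attempts
					continue
				}
				resp.Body.Close()
				fmt.Println("healthz:", resp.Status)
				return
			}
		}
	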
	I1025 10:48:36.919252  420177 out.go:252] * Updating the running docker "pause-494622" container ...
	I1025 10:48:36.919288  420177 machine.go:93] provisionDockerMachine start ...
	I1025 10:48:36.919385  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:36.936950  420177 main.go:141] libmachine: Using SSH client type: native
	I1025 10:48:36.937286  420177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1025 10:48:36.937296  420177 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:48:37.101485  420177 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-494622
	
	I1025 10:48:37.101511  420177 ubuntu.go:182] provisioning hostname "pause-494622"
	I1025 10:48:37.101575  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:37.119455  420177 main.go:141] libmachine: Using SSH client type: native
	I1025 10:48:37.119779  420177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1025 10:48:37.119791  420177 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-494622 && echo "pause-494622" | sudo tee /etc/hostname
	I1025 10:48:37.283195  420177 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-494622
	
	I1025 10:48:37.283288  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:37.300664  420177 main.go:141] libmachine: Using SSH client type: native
	I1025 10:48:37.300973  420177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1025 10:48:37.300993  420177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-494622' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-494622/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-494622' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:48:37.450412  420177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:48:37.450439  420177 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:48:37.450459  420177 ubuntu.go:190] setting up certificates
	I1025 10:48:37.450469  420177 provision.go:84] configureAuth start
	I1025 10:48:37.450548  420177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-494622
	I1025 10:48:37.470268  420177 provision.go:143] copyHostCerts
	I1025 10:48:37.470340  420177 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:48:37.470358  420177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:48:37.470440  420177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:48:37.470553  420177 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:48:37.470564  420177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:48:37.470591  420177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:48:37.470651  420177 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:48:37.470659  420177 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:48:37.470683  420177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:48:37.470737  420177 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.pause-494622 san=[127.0.0.1 192.168.85.2 localhost minikube pause-494622]
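	
	The line above issues the machine's server certificate with SANs covering every address the daemon may be reached at. A compact Go sketch of the same idea, self-signed for brevity where minikube signs with its CA (ca.pem/ca-key.pem); the SAN list and expiry are taken from this log:
	
		package main
	
		import (
			"crypto/ecdsa"
			"crypto/elliptic"
			"crypto/rand"
			"crypto/x509"
			"crypto/x509/pkix"
			"fmt"
			"math/big"
			"net"
			"time"
		)
	
		func main() {
			key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
			tmpl := &x509.Certificate{
				SerialNumber: big.NewInt(1),
				Subject:      pkix.Name{Organization: []string{"jenkins.pause-494622"}},
				NotBefore:    time.Now(),
				NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
				IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
				DNSNames:     []string{"localhost", "minikube", "pause-494622"},
				ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			}
			// Self-signed here; minikube signs with its CA instead.
			der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
			fmt.Println(len(der), err)
		}
	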
	I1025 10:48:38.453397  420177 provision.go:177] copyRemoteCerts
	I1025 10:48:38.453457  420177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:48:38.453502  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:38.475745  420177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/pause-494622/id_rsa Username:docker}
	I1025 10:48:38.591676  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:48:38.610267  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 10:48:38.628844  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:48:38.648176  420177 provision.go:87] duration metric: took 1.19768257s to configureAuth
	I1025 10:48:38.648203  420177 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:48:38.648412  420177 config.go:182] Loaded profile config "pause-494622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:48:38.648591  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:38.665309  420177 main.go:141] libmachine: Using SSH client type: native
	I1025 10:48:38.665627  420177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1025 10:48:38.665647  420177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:48:37.613621  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:37.614042  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:37.614083  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:37.614136  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:37.658485  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:37.658511  407575 cri.go:89] found id: ""
	I1025 10:48:37.658526  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:37.658590  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:37.663629  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:37.663724  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:37.748596  407575 cri.go:89] found id: ""
	I1025 10:48:37.748620  407575 logs.go:282] 0 containers: []
	W1025 10:48:37.748637  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:37.748643  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:37.748716  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:37.787592  407575 cri.go:89] found id: ""
	I1025 10:48:37.787615  407575 logs.go:282] 0 containers: []
	W1025 10:48:37.787623  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:37.787629  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:37.787686  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:37.834201  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:37.834232  407575 cri.go:89] found id: ""
	I1025 10:48:37.834240  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:37.834314  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:37.840204  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:37.840302  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:37.884578  407575 cri.go:89] found id: ""
	I1025 10:48:37.884657  407575 logs.go:282] 0 containers: []
	W1025 10:48:37.884680  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:37.884711  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:37.884842  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:37.917392  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:37.917411  407575 cri.go:89] found id: ""
	I1025 10:48:37.917419  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:37.917481  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:37.921268  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:37.921339  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:37.952332  407575 cri.go:89] found id: ""
	I1025 10:48:37.952354  407575 logs.go:282] 0 containers: []
	W1025 10:48:37.952363  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:37.952370  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:37.952495  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:37.988299  407575 cri.go:89] found id: ""
	I1025 10:48:37.988320  407575 logs.go:282] 0 containers: []
	W1025 10:48:37.988328  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:37.988337  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:37.988348  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:38.031930  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:38.032040  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:38.114746  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:38.114824  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:38.156161  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:38.156191  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:38.222174  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:38.222214  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:38.284269  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:38.284297  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:48:38.435760  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:38.435805  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:38.457745  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:38.457838  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:38.565724  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:41.065815  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:41.066333  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:41.066385  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:41.066445  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:41.094971  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:41.094995  407575 cri.go:89] found id: ""
	I1025 10:48:41.095003  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:41.095067  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:41.098624  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:41.098700  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:41.124322  407575 cri.go:89] found id: ""
	I1025 10:48:41.124344  407575 logs.go:282] 0 containers: []
	W1025 10:48:41.124352  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:41.124359  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:41.124417  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:41.150155  407575 cri.go:89] found id: ""
	I1025 10:48:41.150179  407575 logs.go:282] 0 containers: []
	W1025 10:48:41.150188  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:41.150195  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:41.150254  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:41.179154  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:41.179183  407575 cri.go:89] found id: ""
	I1025 10:48:41.179191  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:41.179251  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:41.182864  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:41.182938  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:41.209602  407575 cri.go:89] found id: ""
	I1025 10:48:41.209628  407575 logs.go:282] 0 containers: []
	W1025 10:48:41.209637  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:41.209645  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:41.209705  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:41.247789  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:41.247810  407575 cri.go:89] found id: ""
	I1025 10:48:41.247818  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:41.247874  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:41.251703  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:41.251775  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:41.278847  407575 cri.go:89] found id: ""
	I1025 10:48:41.278871  407575 logs.go:282] 0 containers: []
	W1025 10:48:41.278880  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:41.278887  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:41.278948  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:41.308907  407575 cri.go:89] found id: ""
	I1025 10:48:41.308929  407575 logs.go:282] 0 containers: []
	W1025 10:48:41.308938  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:41.308947  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:41.308959  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:41.379966  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:41.380002  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:41.405781  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:41.405813  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:41.464082  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:41.464114  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:41.494371  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:41.494403  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:48:41.608100  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:41.608138  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:41.630736  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:41.630787  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:41.709965  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:41.710021  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:41.710035  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
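Each gather cycle above follows the same two-step crictl pattern: resolve a component name to a container ID, then tail that container's log. Replaying one step by hand (a sketch; the component name is the only variable):

	# resolve the kube-apiserver container, then pull its last 400 log lines
	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	sudo /usr/local/bin/crictl logs --tail 400 "$ID"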
	I1025 10:48:43.997521  420177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:48:43.997547  420177 machine.go:96] duration metric: took 7.078250117s to provisionDockerMachine
	I1025 10:48:43.997560  420177 start.go:293] postStartSetup for "pause-494622" (driver="docker")
	I1025 10:48:43.997571  420177 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:48:43.997640  420177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:48:43.997700  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:44.017956  420177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/pause-494622/id_rsa Username:docker}
	I1025 10:48:44.126044  420177 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:48:44.129436  420177 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:48:44.129466  420177 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:48:44.129477  420177 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:48:44.129532  420177 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:48:44.129622  420177 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:48:44.129741  420177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:48:44.137226  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:48:44.155087  420177 start.go:296] duration metric: took 157.512154ms for postStartSetup
	I1025 10:48:44.155171  420177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:48:44.155238  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:44.172794  420177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/pause-494622/id_rsa Username:docker}
	I1025 10:48:44.277229  420177 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:48:44.285283  420177 fix.go:56] duration metric: took 7.386643463s for fixHost
	I1025 10:48:44.285305  420177 start.go:83] releasing machines lock for "pause-494622", held for 7.386701884s
	I1025 10:48:44.285375  420177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-494622
	I1025 10:48:44.304852  420177 ssh_runner.go:195] Run: cat /version.json
	I1025 10:48:44.304903  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:44.305175  420177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:48:44.305230  420177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-494622
	I1025 10:48:44.342356  420177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/pause-494622/id_rsa Username:docker}
	I1025 10:48:44.348248  420177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/pause-494622/id_rsa Username:docker}
	I1025 10:48:44.466240  420177 ssh_runner.go:195] Run: systemctl --version
	I1025 10:48:44.565845  420177 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:48:44.620137  420177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:48:44.628162  420177 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:48:44.628237  420177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:48:44.638542  420177 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:48:44.638573  420177 start.go:495] detecting cgroup driver to use...
	I1025 10:48:44.638606  420177 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:48:44.638666  420177 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:48:44.663975  420177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:48:44.682569  420177 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:48:44.682641  420177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:48:44.704469  420177 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:48:44.719329  420177 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:48:44.908590  420177 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:48:45.100215  420177 docker.go:234] disabling docker service ...
	I1025 10:48:45.100316  420177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:48:45.132862  420177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:48:45.151815  420177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:48:45.333705  420177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:48:45.514748  420177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:48:45.528517  420177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:48:45.544533  420177 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:48:45.544627  420177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.554640  420177 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:48:45.554743  420177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.564491  420177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.574327  420177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.583903  420177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:48:45.593207  420177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.603025  420177 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.612551  420177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:48:45.622075  420177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:48:45.630267  420177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:48:45.638184  420177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:48:45.774525  420177 ssh_runner.go:195] Run: sudo systemctl restart crio
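Taken together, the sed edits above would leave the CRI-O drop-in looking roughly like this before the restart (reconstructed from the commands; key order is an assumption):

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits)
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]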
	I1025 10:48:45.958460  420177 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:48:45.958595  420177 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:48:45.963656  420177 start.go:563] Will wait 60s for crictl version
	I1025 10:48:45.963744  420177 ssh_runner.go:195] Run: which crictl
	I1025 10:48:45.967597  420177 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:48:46.004626  420177 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:48:46.004735  420177 ssh_runner.go:195] Run: crio --version
	I1025 10:48:46.035525  420177 ssh_runner.go:195] Run: crio --version
	I1025 10:48:46.074469  420177 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:48:46.077305  420177 cli_runner.go:164] Run: docker network inspect pause-494622 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:48:46.094130  420177 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:48:46.098553  420177 kubeadm.go:883] updating cluster {Name:pause-494622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-494622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:48:46.098709  420177 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:48:46.098771  420177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:48:46.132479  420177 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:48:46.132504  420177 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:48:46.132566  420177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:48:46.158479  420177 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:48:46.158509  420177 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:48:46.158518  420177 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 10:48:46.158624  420177 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-494622 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-494622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:48:46.158707  420177 ssh_runner.go:195] Run: crio config
	I1025 10:48:46.238449  420177 cni.go:84] Creating CNI manager for ""
	I1025 10:48:46.238536  420177 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:48:46.238578  420177 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:48:46.238631  420177 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-494622 NodeName:pause-494622 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:48:46.238810  420177 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-494622"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:48:46.238904  420177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:48:46.247438  420177 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:48:46.247590  420177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:48:46.255327  420177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1025 10:48:46.268523  420177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:48:46.282281  420177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
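Once the manifest lands at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before use; recent kubeadm releases ship a validate subcommand (a sketch, assuming v1.34 still exposes it):

	# validate the rendered kubeadm manifest against its config schema
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new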
	I1025 10:48:46.295265  420177 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:48:46.299131  420177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:48:46.433360  420177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:48:46.447370  420177 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622 for IP: 192.168.85.2
	I1025 10:48:46.447390  420177 certs.go:195] generating shared ca certs ...
	I1025 10:48:46.447417  420177 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:48:46.447603  420177 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:48:46.447679  420177 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:48:46.447714  420177 certs.go:257] generating profile certs ...
	I1025 10:48:46.447849  420177 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/client.key
	I1025 10:48:46.447971  420177 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/apiserver.key.46e526a6
	I1025 10:48:46.448055  420177 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/proxy-client.key
	I1025 10:48:46.448201  420177 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:48:46.448256  420177 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:48:46.448281  420177 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:48:46.448338  420177 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:48:46.448398  420177 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:48:46.448463  420177 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:48:46.448536  420177 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:48:46.450884  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:48:46.470066  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:48:46.488986  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:48:46.507932  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:48:46.526803  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 10:48:46.546107  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:48:46.564119  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:48:46.582375  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:48:46.599965  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:48:46.618555  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:48:46.637238  420177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:48:46.655232  420177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:48:44.248165  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:44.248606  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:44.248653  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:44.248719  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:44.279379  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:44.279403  407575 cri.go:89] found id: ""
	I1025 10:48:44.279411  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:44.279469  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:44.286082  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:44.286152  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:44.319593  407575 cri.go:89] found id: ""
	I1025 10:48:44.319621  407575 logs.go:282] 0 containers: []
	W1025 10:48:44.319630  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:44.319637  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:44.319697  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:44.376453  407575 cri.go:89] found id: ""
	I1025 10:48:44.376481  407575 logs.go:282] 0 containers: []
	W1025 10:48:44.376489  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:44.376496  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:44.376560  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:44.408932  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:44.408958  407575 cri.go:89] found id: ""
	I1025 10:48:44.408967  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:44.409040  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:44.417934  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:44.418058  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:44.446859  407575 cri.go:89] found id: ""
	I1025 10:48:44.446885  407575 logs.go:282] 0 containers: []
	W1025 10:48:44.446893  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:44.446900  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:44.447040  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:44.485246  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:44.485274  407575 cri.go:89] found id: ""
	I1025 10:48:44.485283  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:44.485340  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:44.489328  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:44.489419  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:44.534289  407575 cri.go:89] found id: ""
	I1025 10:48:44.534319  407575 logs.go:282] 0 containers: []
	W1025 10:48:44.534328  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:44.534335  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:44.534401  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:44.569331  407575 cri.go:89] found id: ""
	I1025 10:48:44.569358  407575 logs.go:282] 0 containers: []
	W1025 10:48:44.569368  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:44.569376  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:44.569388  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:44.607064  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:44.607101  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:44.687832  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:44.687866  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:44.725633  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:44.725714  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:44.800337  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:44.800377  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:44.841185  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:44.841254  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:48:44.983504  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:44.983620  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:45.003479  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:45.003763  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:45.124615  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:46.668844  420177 ssh_runner.go:195] Run: openssl version
	I1025 10:48:46.675491  420177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:48:46.684385  420177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:48:46.693243  420177 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:48:46.693310  420177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:48:46.816123  420177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:48:46.839416  420177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:48:46.859369  420177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:48:46.867260  420177 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:48:46.867325  420177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:48:46.984667  420177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:48:47.005650  420177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:48:47.031087  420177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:48:47.043380  420177 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:48:47.043446  420177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:48:47.115799  420177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
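The link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: the verifier locates a CA in /etc/ssl/certs by hashing the certificate subject and appending a sequence number. Deriving one by hand (a sketch, using a cert path from the log):

	# print the subject hash that names the trust-store symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# expected output here: b5213941  -> symlink b5213941.0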
	I1025 10:48:47.124871  420177 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:48:47.134372  420177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:48:47.198891  420177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:48:47.266586  420177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:48:47.330936  420177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:48:47.391059  420177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:48:47.442423  420177 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
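Each -checkend 86400 run above asks whether a certificate expires within the next 86,400 seconds (24 hours); the answer is carried in the exit status, so no output parsing is needed (a sketch over one of the paths from the log):

	# exit 0: valid for at least another day; exit 1: expiring soon, regenerate
	if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	  echo "cert ok"
	else
	  echo "cert expires within 24h"
	fi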
	I1025 10:48:47.503589  420177 kubeadm.go:400] StartCluster: {Name:pause-494622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-494622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:48:47.503710  420177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:48:47.503788  420177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:48:47.545934  420177 cri.go:89] found id: "96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a"
	I1025 10:48:47.545959  420177 cri.go:89] found id: "0411316bf38375595d1010d1a674f0717b0b515e8d8abbbff9e7ea89d8444814"
	I1025 10:48:47.545964  420177 cri.go:89] found id: "336fcce7f177ee63099c95f463857f65a6c8674b4cae330456af35a66d1e5927"
	I1025 10:48:47.545967  420177 cri.go:89] found id: "6a52960107a32d9d63c9a726cde40d6bc306416bd0198608ded2c7804daad2a9"
	I1025 10:48:47.545971  420177 cri.go:89] found id: "a2575b358a4844d89fec42a8040e731bf10578ac0841857c5d57c9f3d436492e"
	I1025 10:48:47.545974  420177 cri.go:89] found id: "9dd9f6a0583890e7ee49e45dee555a894f7bebf9e5043ef4e4d76611b6528f01"
	I1025 10:48:47.545978  420177 cri.go:89] found id: "0dacb3499bb498eb60afdc5550e70098c64ba1e92df1f33f6f5990e014b49766"
	I1025 10:48:47.546024  420177 cri.go:89] found id: "0a2ac9c53256707ef6dd02317248b4d542d804a6bc9fa4ffe7fcf73c2e0e74ba"
	I1025 10:48:47.546027  420177 cri.go:89] found id: "3e2ff0d6a6cab5c143e5ccbf87b8ae6a4d27061da41107c8817afeec41eb6940"
	I1025 10:48:47.546036  420177 cri.go:89] found id: "5a391da839348564e6c59f05bd1af2867b2ba66f17ea3ba8731f53c762dce341"
	I1025 10:48:47.546043  420177 cri.go:89] found id: "4f17ef8ba1aa56544d98deddadc6648233f1aa7f176fb6f9cb061a02e556af0f"
	I1025 10:48:47.546047  420177 cri.go:89] found id: "ee7dbc55c95114fc27b76a23f146b7b3cdf19a29f316645a7438a38ba79d5fca"
	I1025 10:48:47.546050  420177 cri.go:89] found id: "56698e4599135d8d0d3a8b15f20fb0fcbcf302ce721ba8c99956c5c54be1673d"
	I1025 10:48:47.546053  420177 cri.go:89] found id: ""
	I1025 10:48:47.546105  420177 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:48:47.558984  420177 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:48:47Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:48:47.559088  420177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:48:47.570998  420177 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:48:47.571022  420177 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:48:47.571078  420177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:48:47.590482  420177 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:48:47.591133  420177 kubeconfig.go:125] found "pause-494622" server: "https://192.168.85.2:8443"
	I1025 10:48:47.591937  420177 kapi.go:59] client config for pause-494622: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:48:47.592420  420177 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 10:48:47.592436  420177 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 10:48:47.592442  420177 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 10:48:47.592451  420177 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 10:48:47.592456  420177 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 10:48:47.592823  420177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:48:47.624037  420177 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:48:47.624073  420177 kubeadm.go:601] duration metric: took 53.044306ms to restartPrimaryControlPlane
	I1025 10:48:47.624083  420177 kubeadm.go:402] duration metric: took 120.504497ms to StartCluster
	I1025 10:48:47.624098  420177 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:48:47.624160  420177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:48:47.625100  420177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:48:47.625323  420177 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:48:47.626449  420177 config.go:182] Loaded profile config "pause-494622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:48:47.626552  420177 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:48:47.633564  420177 out.go:179] * Verifying Kubernetes components...
	I1025 10:48:47.633681  420177 out.go:179] * Enabled addons: 
	I1025 10:48:47.637976  420177 addons.go:514] duration metric: took 11.420491ms for enable addons: enabled=[]
	I1025 10:48:47.638121  420177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:48:48.044329  420177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:48:48.082101  420177 node_ready.go:35] waiting up to 6m0s for node "pause-494622" to be "Ready" ...
	I1025 10:48:47.625433  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:47.625760  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:47.625797  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:47.625849  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:47.680673  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:47.680694  407575 cri.go:89] found id: ""
	I1025 10:48:47.680702  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:47.680763  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:47.684864  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:47.684940  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:47.733200  407575 cri.go:89] found id: ""
	I1025 10:48:47.733225  407575 logs.go:282] 0 containers: []
	W1025 10:48:47.733233  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:47.733243  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:47.733299  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:47.776490  407575 cri.go:89] found id: ""
	I1025 10:48:47.776512  407575 logs.go:282] 0 containers: []
	W1025 10:48:47.776521  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:47.776527  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:47.776586  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:47.824447  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:47.824467  407575 cri.go:89] found id: ""
	I1025 10:48:47.824475  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:47.824531  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:47.828576  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:47.828647  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:47.879152  407575 cri.go:89] found id: ""
	I1025 10:48:47.879174  407575 logs.go:282] 0 containers: []
	W1025 10:48:47.879183  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:47.879192  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:47.879251  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:47.925520  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:47.925584  407575 cri.go:89] found id: ""
	I1025 10:48:47.925596  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:47.925653  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:47.929563  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:47.929690  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:47.977060  407575 cri.go:89] found id: ""
	I1025 10:48:47.977136  407575 logs.go:282] 0 containers: []
	W1025 10:48:47.977161  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:47.977188  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:47.977304  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:48.016817  407575 cri.go:89] found id: ""
	I1025 10:48:48.016896  407575 logs.go:282] 0 containers: []
	W1025 10:48:48.016921  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:48.016948  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:48.017031  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:48.052352  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:48.052436  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:48.159870  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:48.159892  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:48.159908  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:48.197106  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:48.197142  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:48.282024  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:48.282067  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:48.319979  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:48.320011  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:48.399770  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:48.399808  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:48.494989  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:48.495018  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:48:51.187924  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:51.188330  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:51.188374  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:51.188437  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:51.271734  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:51.271756  407575 cri.go:89] found id: ""
	I1025 10:48:51.271765  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:51.271824  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:51.278673  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:51.278751  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:51.337197  407575 cri.go:89] found id: ""
	I1025 10:48:51.337226  407575 logs.go:282] 0 containers: []
	W1025 10:48:51.337235  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:51.337241  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:51.337298  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:51.375970  407575 cri.go:89] found id: ""
	I1025 10:48:51.375996  407575 logs.go:282] 0 containers: []
	W1025 10:48:51.376004  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:51.376011  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:51.376065  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:51.426537  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:51.426562  407575 cri.go:89] found id: ""
	I1025 10:48:51.426571  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:51.426627  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:51.432384  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:51.432459  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:51.480288  407575 cri.go:89] found id: ""
	I1025 10:48:51.480315  407575 logs.go:282] 0 containers: []
	W1025 10:48:51.480331  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:51.480338  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:51.480397  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:51.524869  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:51.524904  407575 cri.go:89] found id: ""
	I1025 10:48:51.524912  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:51.524976  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:51.533331  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:51.533414  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:51.576821  407575 cri.go:89] found id: ""
	I1025 10:48:51.576857  407575 logs.go:282] 0 containers: []
	W1025 10:48:51.576866  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:51.576873  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:51.576942  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:51.616988  407575 cri.go:89] found id: ""
	I1025 10:48:51.617024  407575 logs.go:282] 0 containers: []
	W1025 10:48:51.617033  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:51.617043  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:51.617055  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:51.640072  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:51.640114  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:51.754735  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:51.754761  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:51.754776  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:51.819978  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:51.820019  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:51.923222  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:51.923259  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:51.982535  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:51.982565  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:52.052047  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:52.052089  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:52.107382  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:52.107416  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
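
	Each cycle above follows the same pattern: for every control-plane component, list matching containers with crictl, then tail the logs of whichever IDs were found. Below is a minimal Go sketch of the listing step; it runs crictl locally, whereas minikube executes the identical command on the node over SSH (ssh_runner), so treat it as an illustration rather than the harness code.

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // listContainerIDs mirrors the "crictl ps -a --quiet --name=<component>"
	    // step from the log: it returns all container IDs (running or exited)
	    // whose name matches the given component.
	    func listContainerIDs(component string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a",
	            "--quiet", "--name="+component).Output()
	        if err != nil {
	            return nil, err
	        }
	        // crictl prints one ID per line; no output means no match,
	        // which is the "0 containers: []" case in the log.
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
	            ids, err := listContainerIDs(c)
	            if err != nil {
	                fmt.Println(c, "error:", err)
	                continue
	            }
	            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	        }
	    }

	Run against the node above, this would report one container each for kube-apiserver, kube-scheduler and kube-controller-manager, and none for etcd, coredns, kube-proxy, kindnet or storage-provisioner, matching the "0 containers" lines in the log.
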
	I1025 10:48:52.377280  420177 node_ready.go:49] node "pause-494622" is "Ready"
	I1025 10:48:52.377311  420177 node_ready.go:38] duration metric: took 4.295111717s for node "pause-494622" to be "Ready" ...
	I1025 10:48:52.377326  420177 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:48:52.377384  420177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:48:52.394632  420177 api_server.go:72] duration metric: took 4.76926334s to wait for apiserver process to appear ...
	I1025 10:48:52.394654  420177 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:48:52.394674  420177 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:48:52.416402  420177 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 10:48:52.416427  420177 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 10:48:52.894778  420177 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:48:52.907901  420177 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:48:52.907927  420177 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:48:53.395525  420177 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:48:53.403643  420177 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 10:48:53.405026  420177 api_server.go:141] control plane version: v1.34.1
	I1025 10:48:53.405058  420177 api_server.go:131] duration metric: took 1.010396295s to wait for apiserver health ...
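
	The three healthz probes above trace the apiserver coming up: first a 403 because the unauthenticated probe hits RBAC as system:anonymous, then a 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing, and finally 200. A rough sketch of such a poller, assuming, as the probe does, that any HTTP response at all counts as liveness and only a 200 "ok" counts as healthy:

	    package main

	    import (
	        "crypto/tls"
	        "errors"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    // pollHealthz loops against the apiserver's /healthz until it answers
	    // 200 or the deadline passes. A 403 or 500 still proves the process is
	    // serving; only a refused connection means it is not up yet.
	    func pollHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            Transport: &http.Transport{
	                // The probe carries no client certificate, so skip
	                // verification and rely on the status code alone.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // healthz answered "ok"
	                }
	                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return errors.New("apiserver /healthz never returned 200")
	    }

	    func main() {
	        if err := pollHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
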
	I1025 10:48:53.405068  420177 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:48:53.410387  420177 system_pods.go:59] 7 kube-system pods found
	I1025 10:48:53.410432  420177 system_pods.go:61] "coredns-66bc5c9577-hxv7f" [4ede21c9-566e-4bba-881f-5aa690ed4934] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:48:53.410442  420177 system_pods.go:61] "etcd-pause-494622" [c254a2ab-dcbd-4d7b-838c-7a91485f45fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:48:53.410474  420177 system_pods.go:61] "kindnet-zprkn" [5aa10493-5fd8-4bf9-b4d7-5ca08b07f0aa] Running
	I1025 10:48:53.410496  420177 system_pods.go:61] "kube-apiserver-pause-494622" [ed4419cb-f4c5-497b-a154-4e254454f220] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:48:53.410518  420177 system_pods.go:61] "kube-controller-manager-pause-494622" [d5fa5bd3-5558-4b1c-8c16-cd3f4979d38b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:48:53.410523  420177 system_pods.go:61] "kube-proxy-tmr4x" [b0951588-0d5e-4c4d-a26e-32fe980890b4] Running
	I1025 10:48:53.410530  420177 system_pods.go:61] "kube-scheduler-pause-494622" [4547db9f-4029-4148-8737-db0dfb5f30b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:48:53.410539  420177 system_pods.go:74] duration metric: took 5.465201ms to wait for pod list to return data ...
	I1025 10:48:53.410548  420177 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:48:53.416865  420177 default_sa.go:45] found service account: "default"
	I1025 10:48:53.416902  420177 default_sa.go:55] duration metric: took 6.346654ms for default service account to be created ...
	I1025 10:48:53.416913  420177 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:48:53.423405  420177 system_pods.go:86] 7 kube-system pods found
	I1025 10:48:53.423441  420177 system_pods.go:89] "coredns-66bc5c9577-hxv7f" [4ede21c9-566e-4bba-881f-5aa690ed4934] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:48:53.423451  420177 system_pods.go:89] "etcd-pause-494622" [c254a2ab-dcbd-4d7b-838c-7a91485f45fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:48:53.423479  420177 system_pods.go:89] "kindnet-zprkn" [5aa10493-5fd8-4bf9-b4d7-5ca08b07f0aa] Running
	I1025 10:48:53.423491  420177 system_pods.go:89] "kube-apiserver-pause-494622" [ed4419cb-f4c5-497b-a154-4e254454f220] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:48:53.423499  420177 system_pods.go:89] "kube-controller-manager-pause-494622" [d5fa5bd3-5558-4b1c-8c16-cd3f4979d38b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:48:53.423507  420177 system_pods.go:89] "kube-proxy-tmr4x" [b0951588-0d5e-4c4d-a26e-32fe980890b4] Running
	I1025 10:48:53.423532  420177 system_pods.go:89] "kube-scheduler-pause-494622" [4547db9f-4029-4148-8737-db0dfb5f30b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:48:53.423557  420177 system_pods.go:126] duration metric: took 6.636953ms to wait for k8s-apps to be running ...
	I1025 10:48:53.423572  420177 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:48:53.423642  420177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:48:53.437355  420177 system_svc.go:56] duration metric: took 13.774023ms WaitForService to wait for kubelet
	I1025 10:48:53.437432  420177 kubeadm.go:586] duration metric: took 5.812075649s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:48:53.437471  420177 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:48:53.440982  420177 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:48:53.441074  420177 node_conditions.go:123] node cpu capacity is 2
	I1025 10:48:53.441103  420177 node_conditions.go:105] duration metric: took 3.612441ms to run NodePressure ...
	I1025 10:48:53.441123  420177 start.go:241] waiting for startup goroutines ...
	I1025 10:48:53.441132  420177 start.go:246] waiting for cluster config update ...
	I1025 10:48:53.441151  420177 start.go:255] writing updated cluster config ...
	I1025 10:48:53.441481  420177 ssh_runner.go:195] Run: rm -f paused
	I1025 10:48:53.445506  420177 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:48:53.446199  420177 kapi.go:59] client config for pause-494622: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/client.key", CAFile:"/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
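
	The rest.Config dump above amounts to certificate authentication against the pause-494622 endpoint. A hedged client-go equivalent is below, reusing the host and credential paths from the dump; kubernetes.NewForConfig and the TLSClientConfig field names are standard client-go, everything else is illustrative.

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/rest"
	    )

	    func main() {
	        // Equivalent of the rest.Config dumped above: the endpoint plus the
	        // per-profile client cert/key and the cluster CA.
	        cfg := &rest.Config{
	            Host: "https://192.168.85.2:8443",
	            TLSClientConfig: rest.TLSClientConfig{
	                CertFile: "/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/client.crt",
	                KeyFile:  "/home/jenkins/minikube-integration/21767-259409/.minikube/profiles/pause-494622/client.key",
	                CAFile:   "/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt",
	            },
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println("kube-system pods:", len(pods.Items))
	    }
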
	I1025 10:48:53.449667  420177 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hxv7f" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:48:55.470600  420177 pod_ready.go:104] pod "coredns-66bc5c9577-hxv7f" is not "Ready", error: <nil>
	I1025 10:48:54.775143  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:54.775612  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:54.775663  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:54.775724  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:54.817375  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:54.817402  407575 cri.go:89] found id: ""
	I1025 10:48:54.817411  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:54.817466  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:54.821259  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:54.821343  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:54.853775  407575 cri.go:89] found id: ""
	I1025 10:48:54.853802  407575 logs.go:282] 0 containers: []
	W1025 10:48:54.853811  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:54.853818  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:54.853877  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:54.886067  407575 cri.go:89] found id: ""
	I1025 10:48:54.886094  407575 logs.go:282] 0 containers: []
	W1025 10:48:54.886103  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:54.886109  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:54.886169  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:54.914393  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:54.914415  407575 cri.go:89] found id: ""
	I1025 10:48:54.914423  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:54.914481  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:54.918282  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:54.918373  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:54.946544  407575 cri.go:89] found id: ""
	I1025 10:48:54.946570  407575 logs.go:282] 0 containers: []
	W1025 10:48:54.946579  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:54.946587  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:54.946650  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:54.989737  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:54.989760  407575 cri.go:89] found id: ""
	I1025 10:48:54.989768  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:54.989829  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:54.993593  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:54.993695  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:55.028699  407575 cri.go:89] found id: ""
	I1025 10:48:55.028730  407575 logs.go:282] 0 containers: []
	W1025 10:48:55.028738  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:55.028745  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:55.028810  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:55.056132  407575 cri.go:89] found id: ""
	I1025 10:48:55.056159  407575 logs.go:282] 0 containers: []
	W1025 10:48:55.056167  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:55.056178  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:55.056189  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:55.089964  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:55.090074  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:48:55.213937  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:55.213974  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:55.243098  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:55.243138  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:55.312763  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:55.312786  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:55.312799  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:55.345729  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:55.345763  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:55.407751  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:55.407786  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:55.435382  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:55.435413  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1025 10:48:57.955184  420177 pod_ready.go:104] pod "coredns-66bc5c9577-hxv7f" is not "Ready", error: <nil>
	I1025 10:48:59.455843  420177 pod_ready.go:94] pod "coredns-66bc5c9577-hxv7f" is "Ready"
	I1025 10:48:59.455883  420177 pod_ready.go:86] duration metric: took 6.006189531s for pod "coredns-66bc5c9577-hxv7f" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:48:59.458761  420177 pod_ready.go:83] waiting for pod "etcd-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:49:01.466458  420177 pod_ready.go:104] pod "etcd-pause-494622" is not "Ready", error: <nil>
	I1025 10:48:57.995639  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:48:57.996079  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:48:57.996127  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:48:57.996186  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:48:58.026719  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:58.026744  407575 cri.go:89] found id: ""
	I1025 10:48:58.026754  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:48:58.026816  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:58.030693  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:48:58.030770  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:48:58.063753  407575 cri.go:89] found id: ""
	I1025 10:48:58.063778  407575 logs.go:282] 0 containers: []
	W1025 10:48:58.063787  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:48:58.063794  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:48:58.063854  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:48:58.091620  407575 cri.go:89] found id: ""
	I1025 10:48:58.091699  407575 logs.go:282] 0 containers: []
	W1025 10:48:58.091715  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:48:58.091723  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:48:58.091797  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:48:58.119102  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:58.119126  407575 cri.go:89] found id: ""
	I1025 10:48:58.119134  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:48:58.119193  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:58.122974  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:48:58.123056  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:48:58.150655  407575 cri.go:89] found id: ""
	I1025 10:48:58.150681  407575 logs.go:282] 0 containers: []
	W1025 10:48:58.150690  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:48:58.150698  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:48:58.150759  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:48:58.179348  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:58.179372  407575 cri.go:89] found id: ""
	I1025 10:48:58.179380  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:48:58.179444  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:48:58.183368  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:48:58.183446  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:48:58.210274  407575 cri.go:89] found id: ""
	I1025 10:48:58.210302  407575 logs.go:282] 0 containers: []
	W1025 10:48:58.210313  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:48:58.210321  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:48:58.210383  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:48:58.238525  407575 cri.go:89] found id: ""
	I1025 10:48:58.238547  407575 logs.go:282] 0 containers: []
	W1025 10:48:58.238556  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:48:58.238565  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:48:58.238577  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:48:58.257827  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:48:58.257933  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:48:58.324852  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:48:58.324871  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:48:58.324883  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:48:58.359826  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:48:58.359863  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:48:58.420783  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:48:58.420820  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:48:58.450071  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:48:58.450101  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:48:58.507255  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:48:58.507300  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:48:58.538953  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:48:58.538983  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:49:01.157245  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:49:01.157745  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:49:01.157808  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:49:01.157883  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:49:01.185737  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:49:01.185759  407575 cri.go:89] found id: ""
	I1025 10:49:01.185768  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:49:01.185824  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:49:01.189571  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:49:01.189667  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:49:01.218681  407575 cri.go:89] found id: ""
	I1025 10:49:01.218714  407575 logs.go:282] 0 containers: []
	W1025 10:49:01.218723  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:49:01.218730  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:49:01.218792  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:49:01.251461  407575 cri.go:89] found id: ""
	I1025 10:49:01.251486  407575 logs.go:282] 0 containers: []
	W1025 10:49:01.251494  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:49:01.251501  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:49:01.251561  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:49:01.278770  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:49:01.278793  407575 cri.go:89] found id: ""
	I1025 10:49:01.278801  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:49:01.278860  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:49:01.282540  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:49:01.282626  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:49:01.313801  407575 cri.go:89] found id: ""
	I1025 10:49:01.313825  407575 logs.go:282] 0 containers: []
	W1025 10:49:01.313834  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:49:01.313841  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:49:01.313905  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:49:01.340546  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:49:01.340618  407575 cri.go:89] found id: ""
	I1025 10:49:01.340658  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:49:01.340760  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:49:01.344428  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:49:01.344566  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:49:01.370669  407575 cri.go:89] found id: ""
	I1025 10:49:01.370695  407575 logs.go:282] 0 containers: []
	W1025 10:49:01.370705  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:49:01.370711  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:49:01.370771  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:49:01.397940  407575 cri.go:89] found id: ""
	I1025 10:49:01.397965  407575 logs.go:282] 0 containers: []
	W1025 10:49:01.397974  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:49:01.398006  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:49:01.398021  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:49:01.455651  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:49:01.455692  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:49:01.492167  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:49:01.492194  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:49:01.609165  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:49:01.609205  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:49:01.627336  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:49:01.627369  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:49:01.697962  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:49:01.698013  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:49:01.698045  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:49:01.738767  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:49:01.738843  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:49:01.806042  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:49:01.806080  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	W1025 10:49:03.964482  420177 pod_ready.go:104] pod "etcd-pause-494622" is not "Ready", error: <nil>
	I1025 10:49:06.466193  420177 pod_ready.go:94] pod "etcd-pause-494622" is "Ready"
	I1025 10:49:06.466231  420177 pod_ready.go:86] duration metric: took 7.007442456s for pod "etcd-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.469267  420177 pod_ready.go:83] waiting for pod "kube-apiserver-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.475692  420177 pod_ready.go:94] pod "kube-apiserver-pause-494622" is "Ready"
	I1025 10:49:06.475770  420177 pod_ready.go:86] duration metric: took 6.469395ms for pod "kube-apiserver-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.479235  420177 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.485486  420177 pod_ready.go:94] pod "kube-controller-manager-pause-494622" is "Ready"
	I1025 10:49:06.485561  420177 pod_ready.go:86] duration metric: took 6.252343ms for pod "kube-controller-manager-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.488545  420177 pod_ready.go:83] waiting for pod "kube-proxy-tmr4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.664240  420177 pod_ready.go:94] pod "kube-proxy-tmr4x" is "Ready"
	I1025 10:49:06.664345  420177 pod_ready.go:86] duration metric: took 175.726386ms for pod "kube-proxy-tmr4x" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:06.867156  420177 pod_ready.go:83] waiting for pod "kube-scheduler-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:07.265197  420177 pod_ready.go:94] pod "kube-scheduler-pause-494622" is "Ready"
	I1025 10:49:07.265295  420177 pod_ready.go:86] duration metric: took 398.011707ms for pod "kube-scheduler-pause-494622" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:49:07.265336  420177 pod_ready.go:40] duration metric: took 13.819796339s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:49:07.347987  420177 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:49:07.351055  420177 out.go:179] * Done! kubectl is now configured to use "pause-494622" cluster and "default" namespace by default
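
	The pod_ready waits above poll each control-plane pod until it is either Ready or gone. A sketch of that loop with client-go follows; waitPodReadyOrGone is an illustrative name, and the gone-counts-as-done behaviour is inferred from the 'to be "Ready" or be gone' wording in the log.

	    // Package podready sketches the "Ready or be gone" wait from the log.
	    package podready

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        apierrors "k8s.io/apimachinery/pkg/api/errors"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitPodReadyOrGone succeeds once the pod reports condition Ready=True,
	    // and also stops early (pod "gone") if the pod no longer exists.
	    func waitPodReadyOrGone(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
	            func(ctx context.Context) (bool, error) {
	                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	                if apierrors.IsNotFound(err) {
	                    return true, nil // pod is gone; treated as done
	                }
	                if err != nil {
	                    return false, nil // transient API error; keep polling
	                }
	                for _, c := range pod.Status.Conditions {
	                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	                        return true, nil
	                    }
	                }
	                fmt.Printf("pod %q is not \"Ready\"\n", name)
	                return false, nil
	            })
	    }
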
	I1025 10:49:04.332586  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:49:04.333021  407575 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1025 10:49:04.333065  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:49:04.333131  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:49:04.364849  407575 cri.go:89] found id: "9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:49:04.364874  407575 cri.go:89] found id: ""
	I1025 10:49:04.364886  407575 logs.go:282] 1 containers: [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f]
	I1025 10:49:04.364954  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:49:04.368735  407575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:49:04.368807  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:49:04.394745  407575 cri.go:89] found id: ""
	I1025 10:49:04.394778  407575 logs.go:282] 0 containers: []
	W1025 10:49:04.394789  407575 logs.go:284] No container was found matching "etcd"
	I1025 10:49:04.394796  407575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:49:04.394857  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:49:04.421884  407575 cri.go:89] found id: ""
	I1025 10:49:04.421908  407575 logs.go:282] 0 containers: []
	W1025 10:49:04.421917  407575 logs.go:284] No container was found matching "coredns"
	I1025 10:49:04.421923  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:49:04.422059  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:49:04.451180  407575 cri.go:89] found id: "ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:49:04.451203  407575 cri.go:89] found id: ""
	I1025 10:49:04.451212  407575 logs.go:282] 1 containers: [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015]
	I1025 10:49:04.451277  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:49:04.455100  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:49:04.455196  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:49:04.485451  407575 cri.go:89] found id: ""
	I1025 10:49:04.485486  407575 logs.go:282] 0 containers: []
	W1025 10:49:04.485495  407575 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:49:04.485502  407575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:49:04.485572  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:49:04.512939  407575 cri.go:89] found id: "1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:49:04.512963  407575 cri.go:89] found id: ""
	I1025 10:49:04.512971  407575 logs.go:282] 1 containers: [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044]
	I1025 10:49:04.513035  407575 ssh_runner.go:195] Run: which crictl
	I1025 10:49:04.517803  407575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:49:04.517875  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:49:04.543990  407575 cri.go:89] found id: ""
	I1025 10:49:04.544017  407575 logs.go:282] 0 containers: []
	W1025 10:49:04.544026  407575 logs.go:284] No container was found matching "kindnet"
	I1025 10:49:04.544032  407575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:49:04.544141  407575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:49:04.580312  407575 cri.go:89] found id: ""
	I1025 10:49:04.580378  407575 logs.go:282] 0 containers: []
	W1025 10:49:04.580401  407575 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:49:04.580430  407575 logs.go:123] Gathering logs for dmesg ...
	I1025 10:49:04.580469  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:49:04.601747  407575 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:49:04.601833  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:49:04.673642  407575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:49:04.673665  407575 logs.go:123] Gathering logs for kube-apiserver [9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f] ...
	I1025 10:49:04.673678  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9a520084c361e4fc3c0b79bb5011779d943d354a2d47f209900345c63762928f"
	I1025 10:49:04.716162  407575 logs.go:123] Gathering logs for kube-scheduler [ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015] ...
	I1025 10:49:04.716198  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ae8c8b5ac60e6782cfb5a9acb46b12c8dd58f208adddb90d6236f5cf6a8e7015"
	I1025 10:49:04.773950  407575 logs.go:123] Gathering logs for kube-controller-manager [1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044] ...
	I1025 10:49:04.774003  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1b7e22150ed590c596c41d56fc09808ddc116c557fe6925955d05a364cbf3044"
	I1025 10:49:04.804150  407575 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:49:04.804178  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:49:04.863060  407575 logs.go:123] Gathering logs for container status ...
	I1025 10:49:04.863095  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:49:04.896130  407575 logs.go:123] Gathering logs for kubelet ...
	I1025 10:49:04.896160  407575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:49:07.516323  407575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	
	
	==> CRI-O <==
	Oct 25 10:48:46 pause-494622 crio[2058]: time="2025-10-25T10:48:46.944535248Z" level=info msg="Starting container: 0411316bf38375595d1010d1a674f0717b0b515e8d8abbbff9e7ea89d8444814" id=87e989b6-a270-48cb-a958-c072d9f0c6e7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:48:46 pause-494622 crio[2058]: time="2025-10-25T10:48:46.969703128Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:48:46 pause-494622 crio[2058]: time="2025-10-25T10:48:46.970650099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:48:46 pause-494622 crio[2058]: time="2025-10-25T10:48:46.984018539Z" level=info msg="Started container" PID=2302 containerID=0411316bf38375595d1010d1a674f0717b0b515e8d8abbbff9e7ea89d8444814 description=kube-system/etcd-pause-494622/etcd id=87e989b6-a270-48cb-a958-c072d9f0c6e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c497ffe2999dc56acac9591d86db75448418124a24ba02bf69912bd8cc462ff0
	Oct 25 10:48:47 pause-494622 crio[2058]: time="2025-10-25T10:48:47.031492635Z" level=info msg="Created container 96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a: kube-system/kindnet-zprkn/kindnet-cni" id=361aa80f-dea3-4e2d-9315-b0971a293f74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:48:47 pause-494622 crio[2058]: time="2025-10-25T10:48:47.035430781Z" level=info msg="Starting container: 96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a" id=1c10d891-fe5e-4561-b881-cf06573cdb89 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:48:47 pause-494622 crio[2058]: time="2025-10-25T10:48:47.037225736Z" level=info msg="Started container" PID=2322 containerID=96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a description=kube-system/kindnet-zprkn/kindnet-cni id=1c10d891-fe5e-4561-b881-cf06573cdb89 name=/runtime.v1.RuntimeService/StartContainer sandboxID=55a4dd7e7fa8a8e138de97901e9220c6c675e7a6839b9f8756bc73266af3663d
	Oct 25 10:48:47 pause-494622 crio[2058]: time="2025-10-25T10:48:47.608650609Z" level=info msg="Created container b30e317be3539ac55cad517b2c4bbfd1d83ba79be6b363594a6726c56ecba536: kube-system/kube-proxy-tmr4x/kube-proxy" id=d144e588-c6c2-4c07-9f42-5e207442f65f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:48:47 pause-494622 crio[2058]: time="2025-10-25T10:48:47.61236477Z" level=info msg="Starting container: b30e317be3539ac55cad517b2c4bbfd1d83ba79be6b363594a6726c56ecba536" id=e9c4b014-b0ae-4a74-8f19-2d161417d1ba name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:48:47 pause-494622 crio[2058]: time="2025-10-25T10:48:47.616760792Z" level=info msg="Started container" PID=2332 containerID=b30e317be3539ac55cad517b2c4bbfd1d83ba79be6b363594a6726c56ecba536 description=kube-system/kube-proxy-tmr4x/kube-proxy id=e9c4b014-b0ae-4a74-8f19-2d161417d1ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b5263aacb14681c4f70dd0233cb26b387ce1775c639c83cfe40ea8df4687304
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.463967774Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.468325133Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.468366897Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.468389962Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.472793475Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.472835486Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.472863991Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.480489664Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.480539781Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.480567252Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.490050912Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.490222442Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.490301318Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.496288157Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:48:57 pause-494622 crio[2058]: time="2025-10-25T10:48:57.496329273Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b30e317be3539       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   26 seconds ago       Running             kube-proxy                1                   8b5263aacb146       kube-proxy-tmr4x                       kube-system
	96c83536eb517       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   27 seconds ago       Running             kindnet-cni               1                   55a4dd7e7fa8a       kindnet-zprkn                          kube-system
	0411316bf3837       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   27 seconds ago       Running             etcd                      1                   c497ffe2999dc       etcd-pause-494622                      kube-system
	336fcce7f177e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   27 seconds ago       Running             kube-scheduler            1                   f33ce0d6881be       kube-scheduler-pause-494622            kube-system
	6a52960107a32       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   27 seconds ago       Running             kube-apiserver            1                   3b7e79d0e549e       kube-apiserver-pause-494622            kube-system
	a2575b358a484       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   27 seconds ago       Running             kube-controller-manager   1                   8d5a6c5509aaa       kube-controller-manager-pause-494622   kube-system
	9dd9f6a058389       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   27 seconds ago       Running             coredns                   1                   71ee8a1159ac4       coredns-66bc5c9577-hxv7f               kube-system
	0dacb3499bb49       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   39 seconds ago       Exited              coredns                   0                   71ee8a1159ac4       coredns-66bc5c9577-hxv7f               kube-system
	0a2ac9c532567       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   55a4dd7e7fa8a       kindnet-zprkn                          kube-system
	3e2ff0d6a6cab       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   8b5263aacb146       kube-proxy-tmr4x                       kube-system
	5a391da839348       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   3b7e79d0e549e       kube-apiserver-pause-494622            kube-system
	4f17ef8ba1aa5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   c497ffe2999dc       etcd-pause-494622                      kube-system
	ee7dbc55c9511       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   8d5a6c5509aaa       kube-controller-manager-pause-494622   kube-system
	56698e4599135       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   f33ce0d6881be       kube-scheduler-pause-494622            kube-system
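
	Reading the table: ATTEMPT is the per-pod restart count, so each component's Running attempt 1 sits next to its Exited attempt 0 predecessor in the same sandbox, which is what a pause/unpause restart looks like. The same view can be produced programmatically; the JSON field names below are an assumption based on the CRI ListContainers schema served by crictl ps -a -o json, not something shown in this report.

	    package main

	    import (
	        "encoding/json"
	        "fmt"
	        "os/exec"
	    )

	    // criContainers matches the assumed JSON shape of `crictl ps -a -o json`
	    // (the CRI ListContainers response); verify against your crictl version.
	    type criContainers struct {
	        Containers []struct {
	            ID       string `json:"id"`
	            State    string `json:"state"` // e.g. CONTAINER_RUNNING, CONTAINER_EXITED
	            Metadata struct {
	                Name    string `json:"name"`
	                Attempt int    `json:"attempt"`
	            } `json:"metadata"`
	        } `json:"containers"`
	    }

	    func main() {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "-o", "json").Output()
	        if err != nil {
	            panic(err)
	        }
	        var cs criContainers
	        if err := json.Unmarshal(out, &cs); err != nil {
	            panic(err)
	        }
	        for _, c := range cs.Containers {
	            fmt.Printf("%-12.12s %-25s attempt=%d %s\n",
	                c.ID, c.Metadata.Name, c.Metadata.Attempt, c.State)
	        }
	    }
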
	
	
	==> coredns [0dacb3499bb498eb60afdc5550e70098c64ba1e92df1f33f6f5990e014b49766] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36432 - 60643 "HINFO IN 4746436151050476060.9097902981941713514. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.04248122s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9dd9f6a0583890e7ee49e45dee555a894f7bebf9e5043ef4e4d76611b6528f01] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50987 - 28941 "HINFO IN 7474770809676891525.1809044207862229286. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032983536s
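	
	The restarted CoreDNS instance above keeps retrying its list calls against 10.96.0.1:443 while the API server comes back, logs "waiting for Kubernetes API", and finally starts with an unsynced cache. A minimal Go sketch of that wait-for-API loop follows; the endpoint, timeouts, and skipped TLS verification are illustrative assumptions, not CoreDNS's actual code:
	
	// poll_api.go: poll the apiserver's /readyz endpoint until it answers,
	// mirroring the "waiting for Kubernetes API" lines above.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		// Real in-cluster clients verify the cluster CA; InsecureSkipVerify just
		// keeps this sketch self-contained.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.96.0.1:443/readyz")
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("API server is back")
				return
			}
			if err != nil {
				fmt.Println("still waiting:", err) // e.g. "connect: connection refused"
			} else {
				resp.Body.Close()
			}
			time.Sleep(2 * time.Second)
		}
		// CoreDNS's equivalent of giving up is the "[WARNING] plugin/kubernetes:
		// starting server with unsynced Kubernetes API" line above.
		fmt.Println("deadline passed; starting with unsynced state")
	}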
	
	
	==> describe nodes <==
	Name:               pause-494622
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-494622
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=pause-494622
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_47_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:47:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-494622
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:49:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:48:34 +0000   Sat, 25 Oct 2025 10:47:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:48:34 +0000   Sat, 25 Oct 2025 10:47:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:48:34 +0000   Sat, 25 Oct 2025 10:47:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:48:34 +0000   Sat, 25 Oct 2025 10:48:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-494622
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                f2b31047-a0f3-404e-9b65-adb974dd9b26
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-hxv7f                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     82s
	  kube-system                 etcd-pause-494622                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         87s
	  kube-system                 kindnet-zprkn                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      82s
	  kube-system                 kube-apiserver-pause-494622             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-pause-494622    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-tmr4x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-scheduler-pause-494622             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 80s                kube-proxy       
	  Normal   Starting                 21s                kube-proxy       
	  Normal   NodeHasSufficientPID     96s (x8 over 96s)  kubelet          Node pause-494622 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 96s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  96s (x8 over 96s)  kubelet          Node pause-494622 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    96s (x8 over 96s)  kubelet          Node pause-494622 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 96s                kubelet          Starting kubelet.
	  Normal   Starting                 87s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 87s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  87s                kubelet          Node pause-494622 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    87s                kubelet          Node pause-494622 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     87s                kubelet          Node pause-494622 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           83s                node-controller  Node pause-494622 event: Registered Node pause-494622 in Controller
	  Normal   NodeReady                40s                kubelet          Node pause-494622 status is now: NodeReady
	  Normal   RegisteredNode           20s                node-controller  Node pause-494622 event: Registered Node pause-494622 in Controller
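	
	The percentages in the Allocated resources table are the summed pod requests and limits divided by the node's Allocatable, with the fraction truncated: 850m of the 2000m allocatable CPU shows as 42%, and 220Mi (225280Ki) of 8022296Ki memory shows as 2%. A short Go sketch of that arithmetic, using the values from this node:
	
	// alloc_pct.go: reproduce the percentage columns above. The truncating
	// integer division is consistent with the displayed figures.
	package main
	
	import "fmt"
	
	func pct(used, allocatable int64) int64 {
		return used * 100 / allocatable // integer division truncates
	}
	
	func main() {
		fmt.Printf("cpu:    %d%%\n", pct(850, 2000))         // 850m of 2000m -> 42
		fmt.Printf("memory: %d%%\n", pct(220*1024, 8022296)) // 220Mi in Ki -> 2
	}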
	
	
	==> dmesg <==
	[  +4.737500] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[  +3.234784] overlayfs: idmapped layers are currently not supported
	[Oct25 10:23] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:25] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:31] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[ +23.146409] overlayfs: idmapped layers are currently not supported
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0411316bf38375595d1010d1a674f0717b0b515e8d8abbbff9e7ea89d8444814] <==
	{"level":"warn","ts":"2025-10-25T10:48:50.452917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.496203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.503168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.522940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.535063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.550433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.571824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.584145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.601562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.636153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.650208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.672089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.690679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.716063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.731338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.744644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.764978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.782298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.797500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.818055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.839032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.872210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.886737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.909774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:48:50.984139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56424","server-name":"","error":"EOF"}
	
	
	==> etcd [4f17ef8ba1aa56544d98deddadc6648233f1aa7f176fb6f9cb061a02e556af0f] <==
	{"level":"warn","ts":"2025-10-25T10:47:43.299447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:47:43.336996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:47:43.394329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:47:43.419146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:47:43.442542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:47:43.473480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:47:43.600626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50098","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T10:48:38.838565Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T10:48:38.838622Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-494622","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-25T10:48:38.838769Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T10:48:38.984503Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T10:48:38.984596Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T10:48:38.984641Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-25T10:48:38.984671Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-25T10:48:38.984757Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-25T10:48:38.984809Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:48:38.984831Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T10:48:38.984841Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-25T10:48:38.984760Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:48:38.984854Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T10:48:38.984860Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T10:48:38.987995Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-25T10:48:38.988070Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T10:48:38.988107Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:48:38.988125Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-494622","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 10:49:14 up  2:31,  0 user,  load average: 2.00, 2.61, 2.33
	Linux pause-494622 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0a2ac9c53256707ef6dd02317248b4d542d804a6bc9fa4ffe7fcf73c2e0e74ba] <==
	I1025 10:47:53.320081       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:47:53.320424       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:47:53.320572       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:47:53.320613       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:47:53.320653       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:47:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:47:53.522859       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:47:53.522888       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:47:53.522896       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:47:53.523278       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:48:23.522776       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:48:23.522895       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:48:23.523890       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:48:23.523940       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 10:48:24.723290       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:48:24.723326       1 metrics.go:72] Registering metrics
	I1025 10:48:24.723389       1 controller.go:711] "Syncing nftables rules"
	I1025 10:48:33.530086       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:48:33.530144       1 main.go:301] handling current node
	
	
	==> kindnet [96c83536eb5174d663ace1cf0fadd0dacdd66f200c39a0bdd9cc3de480273b9a] <==
	I1025 10:48:47.186968       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:48:47.187466       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:48:47.187696       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:48:47.187736       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:48:47.187788       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:48:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:48:47.466058       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:48:47.466159       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:48:47.466200       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:48:47.470977       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:48:52.471051       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:48:52.471215       1 metrics.go:72] Registering metrics
	I1025 10:48:52.471373       1 controller.go:711] "Syncing nftables rules"
	I1025 10:48:57.463551       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:48:57.463678       1 main.go:301] handling current node
	I1025 10:49:07.463026       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:49:07.463105       1 main.go:301] handling current node
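	
	Both kindnet instances follow the standard client-go informer lifecycle visible in these logs: start the informers, wait for the local caches to sync, then begin handling nodes. A sketch of that pattern with client-go; it assumes in-cluster credentials with permission to list/watch nodes, and the resync period is arbitrary:
	
	// cache_sync.go: the "Waiting for caches to sync" / "Caches are synced"
	// pattern from the kindnet logs, sketched with client-go.
	package main
	
	import (
		"context"
		"log"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
		nodeInformer := factory.Core().V1().Nodes().Informer()
	
		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()
		factory.Start(ctx.Done())
	
		log.Println("Waiting for caches to sync")
		if !cache.WaitForCacheSync(ctx.Done(), nodeInformer.HasSynced) {
			// Reflector list/watch failures like the "i/o timeout" errors in the
			// first kindnet log keep the sync from completing and end up here.
			log.Fatal("caches never synced")
		}
		log.Println("Caches are synced; handling nodes")
	}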
	
	
	==> kube-apiserver [5a391da839348564e6c59f05bd1af2867b2ba66f17ea3ba8731f53c762dce341] <==
	W1025 10:48:38.860179       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860226       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860273       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860321       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860443       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860572       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860626       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860670       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860723       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860763       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860811       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860857       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.860896       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861438       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861531       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861598       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861638       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861683       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861689       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861736       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861781       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861813       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861837       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861859       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:48:38.861889       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [6a52960107a32d9d63c9a726cde40d6bc306416bd0198608ded2c7804daad2a9] <==
	I1025 10:48:52.434678       1 policy_source.go:240] refreshing policies
	I1025 10:48:52.435124       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:48:52.446949       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:48:52.449666       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:48:52.497677       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:48:52.497708       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:48:52.497823       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:48:52.503577       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:48:52.503678       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 10:48:52.504529       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:48:52.504689       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:48:52.504918       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:48:52.505234       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:48:52.505283       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:48:52.505292       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:48:52.505297       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:48:52.505302       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:48:52.505426       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1025 10:48:52.536993       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:48:52.913699       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:48:53.462318       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:48:54.970899       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:48:55.068719       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:48:55.168885       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:48:55.224824       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [a2575b358a4844d89fec42a8040e731bf10578ac0841857c5d57c9f3d436492e] <==
	I1025 10:48:54.846793       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:48:54.853118       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:48:54.853242       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:48:54.853308       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:48:54.853365       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:48:54.853394       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:48:54.857138       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:48:54.857161       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:48:54.857168       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:48:54.862114       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:48:54.862179       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:48:54.862217       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:48:54.862242       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:48:54.862972       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:48:54.863061       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:48:54.863103       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:48:54.865652       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:48:54.870236       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:48:54.870342       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:48:54.870370       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:48:54.870419       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:48:54.870459       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:48:54.871564       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:48:54.877251       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:48:54.883332       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-controller-manager [ee7dbc55c95114fc27b76a23f146b7b3cdf19a29f316645a7438a38ba79d5fca] <==
	I1025 10:47:51.502646       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:47:51.503666       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:47:51.503747       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:47:51.503804       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:47:51.503872       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:47:51.503948       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-494622"
	I1025 10:47:51.503991       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 10:47:51.504027       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:47:51.504054       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:47:51.504278       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:47:51.504793       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 10:47:51.504932       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:47:51.506843       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:47:51.507426       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:47:51.509070       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:47:51.509518       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:47:51.509564       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:47:51.509686       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:47:51.509848       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:47:51.509876       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:47:51.511691       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:47:51.511777       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:47:51.514423       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 10:47:51.525520       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:48:36.513718       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3e2ff0d6a6cab5c143e5ccbf87b8ae6a4d27061da41107c8817afeec41eb6940] <==
	I1025 10:47:53.312175       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:47:53.399661       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:47:53.500632       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:47:53.500675       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:47:53.500747       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:47:53.520739       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:47:53.520863       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:47:53.614768       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:47:53.615139       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:47:53.615387       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:47:53.619274       1 config.go:200] "Starting service config controller"
	I1025 10:47:53.619382       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:47:53.619706       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:47:53.619713       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:47:53.619740       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:47:53.619745       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:47:53.624774       1 config.go:309] "Starting node config controller"
	I1025 10:47:53.624858       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:47:53.624867       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:47:53.720754       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:47:53.720755       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:47:53.720860       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
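	
	The "Kube-proxy configuration may be incomplete or incorrect" line in both kube-proxy logs is a warning, not a failure: with nodePortAddresses unset, NodePort services accept connections on every local IP. The component-config equivalent of the suggested --nodeport-addresses primary is the nodePortAddresses field; the sketch below renders it with kube-proxy's v1alpha1 config types. Treat the exact output as an assumption, not a verified minikube recipe:
	
	// nodeport_cfg.go: render a KubeProxyConfiguration fragment that restricts
	// NodePort listeners to the primary node IPs, as the warning recommends.
	package main
	
	import (
		"fmt"
	
		v1alpha1 "k8s.io/kube-proxy/config/v1alpha1"
		"sigs.k8s.io/yaml"
	)
	
	func main() {
		cfg := v1alpha1.KubeProxyConfiguration{
			// "primary" selects the node's primary IP family instead of all
			// local IPs.
			NodePortAddresses: []string{"primary"},
		}
		cfg.APIVersion = "kubeproxy.config.k8s.io/v1alpha1"
		cfg.Kind = "KubeProxyConfiguration"
	
		out, err := yaml.Marshal(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}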
	
	
	==> kube-proxy [b30e317be3539ac55cad517b2c4bbfd1d83ba79be6b363594a6726c56ecba536] <==
	I1025 10:48:49.066682       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:48:50.005812       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:48:52.506618       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:48:52.514185       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:48:52.526241       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:48:52.718143       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:48:52.718218       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:48:52.790097       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:48:52.790783       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:48:52.791029       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:48:52.792346       1 config.go:200] "Starting service config controller"
	I1025 10:48:52.792818       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:48:52.792897       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:48:52.792932       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:48:52.792983       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:48:52.793024       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:48:52.794187       1 config.go:309] "Starting node config controller"
	I1025 10:48:52.794253       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:48:52.794285       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:48:52.893661       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:48:52.893732       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:48:52.893820       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [336fcce7f177ee63099c95f463857f65a6c8674b4cae330456af35a66d1e5927] <==
	I1025 10:48:50.322597       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:48:53.142320       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:48:53.142353       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:48:53.147558       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:48:53.147599       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:48:53.147640       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:48:53.147647       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:48:53.147661       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:48:53.147673       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:48:53.148735       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:48:53.148822       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:48:53.254245       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:48:53.254393       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 10:48:53.254516       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [56698e4599135d8d0d3a8b15f20fb0fcbcf302ce721ba8c99956c5c54be1673d] <==
	E1025 10:47:44.666554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:47:44.666677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:47:44.675442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:47:44.675642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:47:45.481181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:47:45.501445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:47:45.563856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:47:45.593162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:47:45.614985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:47:45.674449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:47:45.700484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:47:45.709079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:47:45.712378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:47:45.764205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:47:45.783631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:47:45.883294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:47:45.910837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:47:45.959477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1025 10:47:48.215009       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:48:38.842045       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1025 10:48:38.842166       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1025 10:48:38.842191       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1025 10:48:38.842222       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:48:38.842376       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1025 10:48:38.842392       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.848650    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-494622\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="870f9d95e8db34b2e3bf140101c93265" pod="kube-system/kube-controller-manager-pause-494622"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.850418    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-494622\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4c4ce95ad339f04df5a76cf3062661e9" pod="kube-system/kube-scheduler-pause-494622"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.850725    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-zprkn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5aa10493-5fd8-4bf9-b4d7-5ca08b07f0aa" pod="kube-system/kindnet-zprkn"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.850987    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-hxv7f\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ede21c9-566e-4bba-881f-5aa690ed4934" pod="kube-system/coredns-66bc5c9577-hxv7f"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: I1025 10:48:46.870677    1310 scope.go:117] "RemoveContainer" containerID="3e2ff0d6a6cab5c143e5ccbf87b8ae6a4d27061da41107c8817afeec41eb6940"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.871360    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-hxv7f\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4ede21c9-566e-4bba-881f-5aa690ed4934" pod="kube-system/coredns-66bc5c9577-hxv7f"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.871663    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-494622\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4542b5a9c28d6dd4601ffd75d5f5e92b" pod="kube-system/etcd-pause-494622"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.875131    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-494622\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cd24fe48314e95911e7153f5e59b89df" pod="kube-system/kube-apiserver-pause-494622"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.875428    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-494622\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="870f9d95e8db34b2e3bf140101c93265" pod="kube-system/kube-controller-manager-pause-494622"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.875680    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-494622\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="4c4ce95ad339f04df5a76cf3062661e9" pod="kube-system/kube-scheduler-pause-494622"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.875958    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tmr4x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b0951588-0d5e-4c4d-a26e-32fe980890b4" pod="kube-system/kube-proxy-tmr4x"
	Oct 25 10:48:46 pause-494622 kubelet[1310]: E1025 10:48:46.876251    1310 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-zprkn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5aa10493-5fd8-4bf9-b4d7-5ca08b07f0aa" pod="kube-system/kindnet-zprkn"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.307700    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-494622\" is forbidden: User \"system:node:pause-494622\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" podUID="4542b5a9c28d6dd4601ffd75d5f5e92b" pod="kube-system/etcd-pause-494622"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.308439    1310 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-494622\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.308559    1310 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-494622\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.308633    1310 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-494622\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.345182    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-494622\" is forbidden: User \"system:node:pause-494622\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" podUID="cd24fe48314e95911e7153f5e59b89df" pod="kube-system/kube-apiserver-pause-494622"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.376412    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-494622\" is forbidden: User \"system:node:pause-494622\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" podUID="870f9d95e8db34b2e3bf140101c93265" pod="kube-system/kube-controller-manager-pause-494622"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.392254    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-494622\" is forbidden: User \"system:node:pause-494622\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" podUID="4c4ce95ad339f04df5a76cf3062661e9" pod="kube-system/kube-scheduler-pause-494622"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.406854    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-tmr4x\" is forbidden: User \"system:node:pause-494622\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" podUID="b0951588-0d5e-4c4d-a26e-32fe980890b4" pod="kube-system/kube-proxy-tmr4x"
	Oct 25 10:48:52 pause-494622 kubelet[1310]: E1025 10:48:52.422986    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-zprkn\" is forbidden: User \"system:node:pause-494622\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-494622' and this object" podUID="5aa10493-5fd8-4bf9-b4d7-5ca08b07f0aa" pod="kube-system/kindnet-zprkn"
	Oct 25 10:49:07 pause-494622 kubelet[1310]: W1025 10:49:07.800956    1310 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 25 10:49:07 pause-494622 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:49:08 pause-494622 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:49:08 pause-494622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
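
The scheduler and kubelet excerpts above give the failure sequence for this pause: the kubelet starts getting connection refused on 192.168.85.2:8443, the scheduler shuts down and exits with "finished without leader elect", and systemd stops kubelet.service at 10:49:08. A minimal way to poke at the same state by hand, assuming the kicbase node container is still up and ships curl, would be:

    # Hypothetical manual probe of the API server from inside the node:
    docker exec pause-494622 curl -sk https://localhost:8443/healthz
    # Confirm what systemd did to the kubelet unit:
    docker exec pause-494622 systemctl status kubelet --no-pager
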
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-494622 -n pause-494622
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-494622 -n pause-494622: exit status 2 (386.497721ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-494622 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.88s)
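
The post-mortem status probe can be re-run by hand when triaging a failure like this; the first command below is the exact probe from the helper output above, and minikube status also accepts -o json to dump every component field at once:

    # Same probe the post-mortem helper ran (reported "Running" despite the failed pause):
    out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-494622 -n pause-494622
    # Full component status as JSON:
    out/minikube-linux-arm64 status -o json -p pause-494622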

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-031983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-031983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (281.450742ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:52:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
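
This exit status 11 carries the same root-cause signature as the other Pause and EnableAddon failures in this run: before touching an addon, minikube checks whether the runtime is paused by running runc on the node, and on this crio node /run/runc does not exist. A sketch of re-running that probe by hand, using the command quoted verbatim in the stderr above:

    # Re-run the failing paused-state probe on the node:
    docker exec old-k8s-version-031983 sudo runc list -f json
    # Check whether the runc state directory exists at all:
    docker exec old-k8s-version-031983 ls -la /run/runc
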
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-031983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-031983 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-031983 describe deploy/metrics-server -n kube-system: exit status 1 (83.846387ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-031983 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
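
The assertion checks that the metrics-server deployment's image was rewritten to the fake registry passed via --images/--registries; since the enable step failed, the deployment never existed and the deployment info is empty. Had the addon deployed, the equivalent manual check would be something like:

    # Hypothetical follow-up; expects fake.domain/registry.k8s.io/echoserver:1.4
    kubectl --context old-k8s-version-031983 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
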
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-031983
helpers_test.go:243: (dbg) docker inspect old-k8s-version-031983:

-- stdout --
	[
	    {
	        "Id": "c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19",
	        "Created": "2025-10-25T10:51:50.262019678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 437933,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:51:50.340301581Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/hosts",
	        "LogPath": "/var/lib/docker/containers/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19-json.log",
	        "Name": "/old-k8s-version-031983",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-031983:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-031983",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19",
	                "LowerDir": "/var/lib/docker/overlay2/f83171e8b997df441f44209753365f0b1cf2bf8af3f3f60c6899baef6933b87f-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f83171e8b997df441f44209753365f0b1cf2bf8af3f3f60c6899baef6933b87f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f83171e8b997df441f44209753365f0b1cf2bf8af3f3f60c6899baef6933b87f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f83171e8b997df441f44209753365f0b1cf2bf8af3f3f60c6899baef6933b87f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-031983",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-031983/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-031983",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-031983",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-031983",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "450d6cd8219f62399dd02dfcba993a00e2248b689aa9230ff57fbc13af00ec8f",
	            "SandboxKey": "/var/run/docker/netns/450d6cd8219f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33410"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-031983": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:e3:f8:57:b3:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f134955a6205418e30c262ff57f17637fbd69f7510dbb06a7800f5313ba135a3",
	                    "EndpointID": "ad0e8fee78b0f25378b81d2cd0705e438c1b88cb085c4c14d4bc4e77648d3ac8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-031983",
	                        "c9e4fcd1d868"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
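
The full inspect payload above is mostly noise when triaging; individual fields can be pulled with docker inspect --format instead. For example, the container state plus its static IP on the cluster network, and the host port bound to the API server's 8443/tcp (33411 above):

    # Container state and IP on the old-k8s-version-031983 network:
    docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "old-k8s-version-031983").IPAddress}}' old-k8s-version-031983
    # Host port mapped to the API server:
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-031983
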
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-031983 -n old-k8s-version-031983
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-031983 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-031983 logs -n 25: (1.521234907s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-759329 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo containerd config dump                                                                                                                                                                                                  │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo crio config                                                                                                                                                                                                             │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ delete  │ -p cilium-759329                                                                                                                                                                                                                              │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p force-systemd-env-623432 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-623432  │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ start   │ -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ delete  │ -p kubernetes-upgrade-291330                                                                                                                                                                                                                  │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p cert-expiration-736062 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-736062    │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:51 UTC │
	│ delete  │ -p force-systemd-env-623432                                                                                                                                                                                                                   │ force-systemd-env-623432  │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p cert-options-771620 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-771620       │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:51 UTC │
	│ ssh     │ cert-options-771620 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-771620       │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ ssh     │ -p cert-options-771620 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-771620       │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ delete  │ -p cert-options-771620                                                                                                                                                                                                                        │ cert-options-771620       │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-031983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:51:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:51:43.616866  437544 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:51:43.617042  437544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:51:43.617072  437544 out.go:374] Setting ErrFile to fd 2...
	I1025 10:51:43.617094  437544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:51:43.617379  437544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:51:43.617844  437544 out.go:368] Setting JSON to false
	I1025 10:51:43.618885  437544 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9255,"bootTime":1761380249,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:51:43.618985  437544 start.go:141] virtualization:  
	I1025 10:51:43.622860  437544 out.go:179] * [old-k8s-version-031983] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:51:43.627435  437544 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:51:43.627604  437544 notify.go:220] Checking for updates...
	I1025 10:51:43.634339  437544 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:51:43.637875  437544 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:51:43.641124  437544 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:51:43.644376  437544 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:51:43.647535  437544 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:51:43.651223  437544 config.go:182] Loaded profile config "cert-expiration-736062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:51:43.651341  437544 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:51:43.675591  437544 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:51:43.675733  437544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:51:43.738084  437544 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:51:43.725900664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:51:43.738206  437544 docker.go:318] overlay module found
	I1025 10:51:43.741828  437544 out.go:179] * Using the docker driver based on user configuration
	I1025 10:51:43.745129  437544 start.go:305] selected driver: docker
	I1025 10:51:43.745154  437544 start.go:925] validating driver "docker" against <nil>
	I1025 10:51:43.745172  437544 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:51:43.745967  437544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:51:43.813723  437544 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:51:43.804705215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:51:43.813882  437544 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:51:43.814166  437544 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:51:43.817197  437544 out.go:179] * Using Docker driver with root privileges
	I1025 10:51:43.820338  437544 cni.go:84] Creating CNI manager for ""
	I1025 10:51:43.820406  437544 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:51:43.820423  437544 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:51:43.820508  437544 start.go:349] cluster config:
	{Name:old-k8s-version-031983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:51:43.823659  437544 out.go:179] * Starting "old-k8s-version-031983" primary control-plane node in "old-k8s-version-031983" cluster
	I1025 10:51:43.826628  437544 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:51:43.829773  437544 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:51:43.832701  437544 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:51:43.832774  437544 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1025 10:51:43.832787  437544 cache.go:58] Caching tarball of preloaded images
	I1025 10:51:43.832796  437544 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:51:43.832892  437544 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:51:43.832903  437544 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 10:51:43.833022  437544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/config.json ...
	I1025 10:51:43.833048  437544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/config.json: {Name:mk6ff72a2486fd505f3edc7663c339eb3165b32f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:51:43.852621  437544 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:51:43.852644  437544 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:51:43.852663  437544 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:51:43.852698  437544 start.go:360] acquireMachinesLock for old-k8s-version-031983: {Name:mkea21c13c631a617ed8bc5861a3bc5db7c7a81f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:51:43.852809  437544 start.go:364] duration metric: took 89.298µs to acquireMachinesLock for "old-k8s-version-031983"
	I1025 10:51:43.852841  437544 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-031983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:51:43.852915  437544 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:51:43.856251  437544 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:51:43.856486  437544 start.go:159] libmachine.API.Create for "old-k8s-version-031983" (driver="docker")
	I1025 10:51:43.856524  437544 client.go:168] LocalClient.Create starting
	I1025 10:51:43.856609  437544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem
	I1025 10:51:43.856650  437544 main.go:141] libmachine: Decoding PEM data...
	I1025 10:51:43.856670  437544 main.go:141] libmachine: Parsing certificate...
	I1025 10:51:43.856728  437544 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem
	I1025 10:51:43.856751  437544 main.go:141] libmachine: Decoding PEM data...
	I1025 10:51:43.856764  437544 main.go:141] libmachine: Parsing certificate...
	I1025 10:51:43.857138  437544 cli_runner.go:164] Run: docker network inspect old-k8s-version-031983 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:51:43.873811  437544 cli_runner.go:211] docker network inspect old-k8s-version-031983 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:51:43.873900  437544 network_create.go:284] running [docker network inspect old-k8s-version-031983] to gather additional debugging logs...
	I1025 10:51:43.873920  437544 cli_runner.go:164] Run: docker network inspect old-k8s-version-031983
	W1025 10:51:43.891132  437544 cli_runner.go:211] docker network inspect old-k8s-version-031983 returned with exit code 1
	I1025 10:51:43.891175  437544 network_create.go:287] error running [docker network inspect old-k8s-version-031983]: docker network inspect old-k8s-version-031983: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-031983 not found
	I1025 10:51:43.891196  437544 network_create.go:289] output of [docker network inspect old-k8s-version-031983]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-031983 not found
	
	** /stderr **
	I1025 10:51:43.891319  437544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:51:43.909578  437544 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2218a4d410c8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:a0:c3:54:c6:1f} reservation:<nil>}
	I1025 10:51:43.909943  437544 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-249eaf2d238d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:87:b9:4d:4c:0d} reservation:<nil>}
	I1025 10:51:43.910217  437544 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-210d4b236ff6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:d5:32:45:e6:85} reservation:<nil>}
	I1025 10:51:43.910479  437544 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-68d802b572db IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:a6:32:64:ab:5e} reservation:<nil>}
	I1025 10:51:43.910918  437544 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a79b0}
	I1025 10:51:43.910936  437544 network_create.go:124] attempt to create docker network old-k8s-version-031983 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 10:51:43.910994  437544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-031983 old-k8s-version-031983
	I1025 10:51:43.971243  437544 network_create.go:108] docker network old-k8s-version-031983 192.168.85.0/24 created
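
Note: the `skipping subnet ... that is taken` lines above show the free-subnet scan minikube performs before creating the network: candidate /24s at 192.168.49.0, .58, .67 and .76 are rejected because existing bridges already occupy them, and 192.168.85.0/24 is the first free one. A minimal Go sketch of that scan, assuming only what the log shows (the candidate progression in steps of 9 and the overlap test; function and variable names are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate /24s in the same 49, 58, 67, ...
// progression visible in the log and returns the first one that
// overlaps no already-taken CIDR.
func firstFreeSubnet(taken []string) (string, error) {
	for third := 49; third <= 255; third += 9 {
		candidate := fmt.Sprintf("192.168.%d.0/24", third)
		_, candNet, err := net.ParseCIDR(candidate)
		if err != nil {
			return "", err
		}
		free := true
		for _, t := range taken {
			_, tNet, err := net.ParseCIDR(t)
			if err != nil {
				return "", err
			}
			if tNet.Contains(candNet.IP) || candNet.Contains(tNet.IP) {
				free = false // candidate collides with an existing bridge
				break
			}
		}
		if free {
			return candidate, nil
		}
	}
	return "", fmt.Errorf("no free subnet in 192.168.0.0/16")
}

func main() {
	taken := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24, matching the log
}
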
	I1025 10:51:43.971279  437544 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-031983" container
	I1025 10:51:43.971364  437544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:51:43.987643  437544 cli_runner.go:164] Run: docker volume create old-k8s-version-031983 --label name.minikube.sigs.k8s.io=old-k8s-version-031983 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:51:44.012806  437544 oci.go:103] Successfully created a docker volume old-k8s-version-031983
	I1025 10:51:44.012922  437544 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-031983-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-031983 --entrypoint /usr/bin/test -v old-k8s-version-031983:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:51:44.571486  437544 oci.go:107] Successfully prepared a docker volume old-k8s-version-031983
	I1025 10:51:44.571544  437544 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:51:44.571566  437544 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:51:44.571643  437544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-031983:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 10:51:50.188937  437544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-031983:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.617238518s)
	I1025 10:51:50.188977  437544 kic.go:203] duration metric: took 5.617407356s to extract preloaded images to volume ...
	W1025 10:51:50.189129  437544 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:51:50.189241  437544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:51:50.246253  437544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-031983 --name old-k8s-version-031983 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-031983 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-031983 --network old-k8s-version-031983 --ip 192.168.85.2 --volume old-k8s-version-031983:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:51:50.549645  437544 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Running}}
	I1025 10:51:50.567892  437544 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:51:50.593371  437544 cli_runner.go:164] Run: docker exec old-k8s-version-031983 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:51:50.647441  437544 oci.go:144] the created container "old-k8s-version-031983" has a running status.
	I1025 10:51:50.647469  437544 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa...
	I1025 10:51:51.272736  437544 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:51:51.304913  437544 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:51:51.324093  437544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:51:51.324118  437544 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-031983 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:51:51.376752  437544 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:51:51.396019  437544 machine.go:93] provisionDockerMachine start ...
	I1025 10:51:51.396135  437544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:51:51.414212  437544 main.go:141] libmachine: Using SSH client type: native
	I1025 10:51:51.414603  437544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1025 10:51:51.414625  437544 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:51:51.415414  437544 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 10:51:54.577749  437544 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-031983
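
Note: the `Using SSH client type: native` entries above correspond to dialing the container's published 22/tcp port (33408 on 127.0.0.1) with the id_rsa key generated earlier and running `hostname` as a first probe; the initial `handshake failed: EOF` simply means sshd inside the container was not yet listening, and the dial succeeded about three seconds later. A rough equivalent using golang.org/x/crypto/ssh, with the port and key path copied from the log (host-key verification is skipped here purely for illustration):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33408", cfg)
	if err != nil {
		panic(err) // a retry loop would absorb the early EOF seen in the log
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname") // same first command the log runs
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
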
	
	I1025 10:51:54.577772  437544 ubuntu.go:182] provisioning hostname "old-k8s-version-031983"
	I1025 10:51:54.577838  437544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:51:54.594854  437544 main.go:141] libmachine: Using SSH client type: native
	I1025 10:51:54.595171  437544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1025 10:51:54.595186  437544 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-031983 && echo "old-k8s-version-031983" | sudo tee /etc/hostname
	I1025 10:51:54.756089  437544 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-031983
	
	I1025 10:51:54.756194  437544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:51:54.775448  437544 main.go:141] libmachine: Using SSH client type: native
	I1025 10:51:54.775756  437544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1025 10:51:54.775789  437544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-031983' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-031983/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-031983' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:51:54.926172  437544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:51:54.926197  437544 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:51:54.926215  437544 ubuntu.go:190] setting up certificates
	I1025 10:51:54.926226  437544 provision.go:84] configureAuth start
	I1025 10:51:54.926294  437544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-031983
	I1025 10:51:54.943229  437544 provision.go:143] copyHostCerts
	I1025 10:51:54.943307  437544 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:51:54.943323  437544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:51:54.943418  437544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:51:54.943533  437544 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:51:54.943546  437544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:51:54.943582  437544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:51:54.943643  437544 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:51:54.943653  437544 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:51:54.943678  437544 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:51:54.943732  437544 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-031983 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-031983]
	I1025 10:51:55.412552  437544 provision.go:177] copyRemoteCerts
	I1025 10:51:55.412649  437544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:51:55.412705  437544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:51:55.430792  437544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:51:55.537743  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:51:55.555732  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 10:51:55.574156  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:51:55.592420  437544 provision.go:87] duration metric: took 666.169101ms to configureAuth
	I1025 10:51:55.592528  437544 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:51:55.592748  437544 config.go:182] Loaded profile config "old-k8s-version-031983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:51:55.592892  437544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:51:55.609747  437544 main.go:141] libmachine: Using SSH client type: native
	I1025 10:51:55.610143  437544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1025 10:51:55.610168  437544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:51:55.871094  437544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:51:55.871186  437544 machine.go:96] duration metric: took 4.475139384s to provisionDockerMachine
	I1025 10:51:55.871212  437544 client.go:171] duration metric: took 12.014677637s to LocalClient.Create
	I1025 10:51:55.871269  437544 start.go:167] duration metric: took 12.014783147s to libmachine.API.Create "old-k8s-version-031983"
	I1025 10:51:55.871297  437544 start.go:293] postStartSetup for "old-k8s-version-031983" (driver="docker")
	I1025 10:51:55.871343  437544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:51:55.871455  437544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:51:55.871518  437544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:51:55.889046  437544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:51:55.994215  437544 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:51:55.998088  437544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:51:55.998121  437544 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:51:55.998135  437544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:51:55.998220  437544 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:51:55.998306  437544 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:51:55.998432  437544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:51:56.009198  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:51:56.028704  437544 start.go:296] duration metric: took 157.356977ms for postStartSetup
	I1025 10:51:56.029100  437544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-031983
	I1025 10:51:56.047923  437544 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/config.json ...
	I1025 10:51:56.048233  437544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:51:56.048281  437544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:51:56.065960  437544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:51:56.167242  437544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:51:56.172276  437544 start.go:128] duration metric: took 12.319342604s to createHost
	I1025 10:51:56.172308  437544 start.go:83] releasing machines lock for "old-k8s-version-031983", held for 12.319478819s
	I1025 10:51:56.172380  437544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-031983
	I1025 10:51:56.192928  437544 ssh_runner.go:195] Run: cat /version.json
	I1025 10:51:56.192941  437544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:51:56.192983  437544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:51:56.193000  437544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:51:56.213510  437544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:51:56.233335  437544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:51:56.420107  437544 ssh_runner.go:195] Run: systemctl --version
	I1025 10:51:56.426849  437544 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:51:56.464616  437544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:51:56.468975  437544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:51:56.469116  437544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:51:56.498349  437544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 10:51:56.498373  437544 start.go:495] detecting cgroup driver to use...
	I1025 10:51:56.498434  437544 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:51:56.498518  437544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:51:56.517029  437544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:51:56.536550  437544 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:51:56.536666  437544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:51:56.554591  437544 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:51:56.572535  437544 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:51:56.707107  437544 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:51:56.837644  437544 docker.go:234] disabling docker service ...
	I1025 10:51:56.837712  437544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:51:56.861227  437544 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:51:56.875609  437544 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:51:57.004654  437544 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:51:57.132480  437544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:51:57.145691  437544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:51:57.160322  437544 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 10:51:57.160393  437544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:51:57.170372  437544 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:51:57.170443  437544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:51:57.180668  437544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:51:57.189875  437544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:51:57.198941  437544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:51:57.207204  437544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:51:57.216920  437544 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:51:57.234437  437544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:51:57.243418  437544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:51:57.252048  437544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:51:57.259512  437544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:51:57.386899  437544 ssh_runner.go:195] Run: sudo systemctl restart crio
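
Note: the run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image pinned to registry.k8s.io/pause:3.9, cgroup_manager switched to cgroupfs, conmon_cgroup re-added, the unprivileged-port sysctl injected) before `systemctl restart crio` picks the file up. A self-contained Go sketch of the first two substitutions; the starting file contents below are invented, and only the replacement patterns mirror the sed commands in the log:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented sample contents; the real file ships with the kicbase image.
	conf := "pause_image = \"some/other:pause\"\ncgroup_manager = \"systemd\"\n"
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
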
	I1025 10:51:57.521148  437544 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:51:57.521230  437544 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:51:57.525487  437544 start.go:563] Will wait 60s for crictl version
	I1025 10:51:57.525571  437544 ssh_runner.go:195] Run: which crictl
	I1025 10:51:57.529679  437544 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:51:57.557937  437544 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:51:57.558071  437544 ssh_runner.go:195] Run: crio --version
	I1025 10:51:57.591097  437544 ssh_runner.go:195] Run: crio --version
	I1025 10:51:57.625082  437544 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1025 10:51:57.628044  437544 cli_runner.go:164] Run: docker network inspect old-k8s-version-031983 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:51:57.644319  437544 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:51:57.648402  437544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
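
Note: the `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` pipeline above (repeated later for control-plane.minikube.internal) is an idempotent /etc/hosts update: drop any stale line ending in the tab-separated hostname, append the fresh mapping, and copy the result back over the original. The same idea in Go; ensureHostsEntry is a made-up helper name for this sketch:

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry removes any existing mapping for host and appends a
// fresh "ip<TAB>host" line, mirroring the shell pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // same filter as `grep -v $'\t<host>$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
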
	I1025 10:51:57.659008  437544 kubeadm.go:883] updating cluster {Name:old-k8s-version-031983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:51:57.659120  437544 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:51:57.659190  437544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:51:57.693001  437544 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:51:57.693021  437544 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:51:57.693078  437544 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:51:57.719776  437544 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:51:57.719798  437544 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:51:57.719806  437544 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1025 10:51:57.719887  437544 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-031983 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:51:57.719971  437544 ssh_runner.go:195] Run: crio config
	I1025 10:51:57.784039  437544 cni.go:84] Creating CNI manager for ""
	I1025 10:51:57.784060  437544 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:51:57.784080  437544 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:51:57.784104  437544 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-031983 NodeName:old-k8s-version-031983 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:51:57.784254  437544 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-031983"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:51:57.784334  437544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1025 10:51:57.792366  437544 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:51:57.792485  437544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:51:57.800327  437544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1025 10:51:57.815306  437544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:51:57.828882  437544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1025 10:51:57.842050  437544 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:51:57.846173  437544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:51:57.855965  437544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:51:57.983745  437544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:51:58.006862  437544 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983 for IP: 192.168.85.2
	I1025 10:51:58.006890  437544 certs.go:195] generating shared ca certs ...
	I1025 10:51:58.006910  437544 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:51:58.007081  437544 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:51:58.007130  437544 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:51:58.007142  437544 certs.go:257] generating profile certs ...
	I1025 10:51:58.007201  437544 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.key
	I1025 10:51:58.007222  437544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt with IP's: []
	I1025 10:51:59.479444  437544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt ...
	I1025 10:51:59.479480  437544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: {Name:mk0d67bdaacb0f91777d8e0a70ab60eeeb2ce238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:51:59.479666  437544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.key ...
	I1025 10:51:59.479681  437544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.key: {Name:mkad41af6d4ed707d2cc28f0a4538d8c368903b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:51:59.479758  437544 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.key.11393817
	I1025 10:51:59.479772  437544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.crt.11393817 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1025 10:52:00.199849  437544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.crt.11393817 ...
	I1025 10:52:00.199892  437544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.crt.11393817: {Name:mkbcab462616a4ade60642273ac8ac21c88a2200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:52:00.200096  437544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.key.11393817 ...
	I1025 10:52:00.200108  437544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.key.11393817: {Name:mkf5d6157e68212eee10f105e9c5c4eb2a82700d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:52:00.200185  437544 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.crt.11393817 -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.crt
	I1025 10:52:00.200310  437544 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.key.11393817 -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.key
	I1025 10:52:00.200376  437544 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.key
	I1025 10:52:00.200391  437544 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.crt with IP's: []
	I1025 10:52:00.294702  437544 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.crt ...
	I1025 10:52:00.294746  437544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.crt: {Name:mka1d712988f266a3bb21be5b749524cc177f45b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:52:00.294989  437544 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.key ...
	I1025 10:52:00.295000  437544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.key: {Name:mka7d0b6514e9abda12945b87368ea24f871919c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:52:00.295196  437544 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:52:00.295235  437544 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:52:00.295245  437544 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:52:00.295281  437544 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:52:00.295304  437544 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:52:00.295327  437544 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:52:00.295372  437544 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:52:00.296139  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:52:00.362245  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:52:00.436862  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:52:00.562249  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:52:00.597193  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 10:52:00.633364  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:52:00.661689  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:52:00.683189  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:52:00.704122  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:52:00.726150  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:52:00.748075  437544 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:52:00.767532  437544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:52:00.781403  437544 ssh_runner.go:195] Run: openssl version
	I1025 10:52:00.788061  437544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:52:00.796765  437544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:52:00.800758  437544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:52:00.800851  437544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:52:00.842283  437544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:52:00.851161  437544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:52:00.859895  437544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:52:00.863721  437544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:52:00.863820  437544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:52:00.905327  437544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:52:00.913685  437544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:52:00.922149  437544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:52:00.927033  437544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:52:00.927130  437544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:52:00.968435  437544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
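
Note: each CA installed above is followed by `openssl x509 -hash -noout` and a `ln -fs` into /etc/ssl/certs. That is OpenSSL's hashed-directory convention: a verifier locates a CA by the hash of its subject name plus a `.0` suffix, which is where the names 51391683.0, 3ec20f2e.0 and b5213941.0 come from. A sketch of the same two steps from Go (linkBySubjectHash is an illustrative name; the paths are the ones in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the two log steps: ask openssl for the
// subject-name hash, then install the cert as <hash>.0 in the certs dir.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
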
	I1025 10:52:00.977165  437544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:52:00.980869  437544 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:52:00.980925  437544 kubeadm.go:400] StartCluster: {Name:old-k8s-version-031983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:52:00.980997  437544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:52:00.981066  437544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:52:01.011142  437544 cri.go:89] found id: ""
	I1025 10:52:01.011270  437544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:52:01.019647  437544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:52:01.027824  437544 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:52:01.027956  437544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:52:01.036345  437544 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:52:01.036367  437544 kubeadm.go:157] found existing configuration files:
	
	I1025 10:52:01.036439  437544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:52:01.044993  437544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:52:01.045079  437544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:52:01.053052  437544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:52:01.061358  437544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:52:01.061423  437544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:52:01.069491  437544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:52:01.077518  437544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:52:01.077585  437544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:52:01.085499  437544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:52:01.093835  437544 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:52:01.093936  437544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:52:01.102115  437544 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:52:01.151286  437544 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1025 10:52:01.151513  437544 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:52:01.191074  437544 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:52:01.191151  437544 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:52:01.191192  437544 kubeadm.go:318] OS: Linux
	I1025 10:52:01.191242  437544 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:52:01.191294  437544 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:52:01.191364  437544 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:52:01.191418  437544 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:52:01.191470  437544 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:52:01.191522  437544 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:52:01.191571  437544 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:52:01.191623  437544 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:52:01.191674  437544 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:52:01.285790  437544 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:52:01.285907  437544 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:52:01.286070  437544 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 10:52:01.462385  437544 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:52:01.468461  437544 out.go:252]   - Generating certificates and keys ...
	I1025 10:52:01.468582  437544 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:52:01.468660  437544 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:52:02.071551  437544 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:52:02.675723  437544 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:52:03.482575  437544 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:52:03.706866  437544 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:52:04.254365  437544 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:52:04.254535  437544 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-031983] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 10:52:04.519694  437544 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:52:04.520010  437544 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-031983] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 10:52:04.933004  437544 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:52:06.816313  437544 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:52:07.124414  437544 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:52:07.124729  437544 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:52:07.805116  437544 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:52:08.180970  437544 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:52:09.110175  437544 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:52:09.515670  437544 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:52:09.516392  437544 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:52:09.523194  437544 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:52:09.526558  437544 out.go:252]   - Booting up control plane ...
	I1025 10:52:09.526681  437544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:52:09.526770  437544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:52:09.526866  437544 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:52:09.553538  437544 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:52:09.553967  437544 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:52:09.554040  437544 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:52:09.694541  437544 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 10:52:17.693235  437544 kubeadm.go:318] [apiclient] All control plane components are healthy after 8.002770 seconds
	I1025 10:52:17.693368  437544 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:52:17.710707  437544 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:52:18.241172  437544 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:52:18.241503  437544 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-031983 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:52:18.756097  437544 kubeadm.go:318] [bootstrap-token] Using token: wd9a46.887cznt6g00np37h
	I1025 10:52:18.759137  437544 out.go:252]   - Configuring RBAC rules ...
	I1025 10:52:18.759271  437544 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:52:18.764510  437544 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:52:18.775649  437544 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:52:18.780271  437544 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:52:18.786907  437544 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:52:18.791232  437544 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:52:18.807402  437544 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:52:19.103796  437544 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:52:19.170845  437544 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:52:19.172059  437544 kubeadm.go:318] 
	I1025 10:52:19.172140  437544 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:52:19.172151  437544 kubeadm.go:318] 
	I1025 10:52:19.172232  437544 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:52:19.172241  437544 kubeadm.go:318] 
	I1025 10:52:19.172268  437544 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:52:19.172333  437544 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:52:19.172400  437544 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:52:19.172410  437544 kubeadm.go:318] 
	I1025 10:52:19.172467  437544 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:52:19.172476  437544 kubeadm.go:318] 
	I1025 10:52:19.172526  437544 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:52:19.172535  437544 kubeadm.go:318] 
	I1025 10:52:19.172599  437544 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:52:19.172681  437544 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:52:19.172761  437544 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:52:19.172768  437544 kubeadm.go:318] 
	I1025 10:52:19.172865  437544 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:52:19.172957  437544 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:52:19.172966  437544 kubeadm.go:318] 
	I1025 10:52:19.173066  437544 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token wd9a46.887cznt6g00np37h \
	I1025 10:52:19.173178  437544 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 \
	I1025 10:52:19.173205  437544 kubeadm.go:318] 	--control-plane 
	I1025 10:52:19.173218  437544 kubeadm.go:318] 
	I1025 10:52:19.173307  437544 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:52:19.173316  437544 kubeadm.go:318] 
	I1025 10:52:19.173401  437544 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token wd9a46.887cznt6g00np37h \
	I1025 10:52:19.173511  437544 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 
	I1025 10:52:19.180929  437544 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 10:52:19.181065  437544 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 10:52:19.181109  437544 cni.go:84] Creating CNI manager for ""
	I1025 10:52:19.181123  437544 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:52:19.186148  437544 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 10:52:19.189076  437544 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:52:19.201935  437544 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1025 10:52:19.201963  437544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:52:19.224715  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:52:20.451563  437544 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.226769031s)
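	kindnet is applied with the cluster's pinned kubectl. To confirm the DaemonSet pods come up (assuming the manifest's usual app=kindnet label):
	  sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get pods -l app=kindnet -o wide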
	I1025 10:52:20.451613  437544 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:52:20.451812  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:20.451969  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-031983 minikube.k8s.io/updated_at=2025_10_25T10_52_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=old-k8s-version-031983 minikube.k8s.io/primary=true
	I1025 10:52:20.745523  437544 ops.go:34] apiserver oom_adj: -16
	I1025 10:52:20.745652  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:21.246071  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:21.746053  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:22.246322  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:22.745792  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:23.246656  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:23.746287  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:24.246608  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:24.746738  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:25.246529  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:25.745754  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:26.246523  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:26.746038  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:27.245746  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:27.746644  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:28.245774  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:28.746421  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:29.245836  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:29.745945  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:30.246427  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:30.746425  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:31.246218  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:31.746551  437544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:52:31.844859  437544 kubeadm.go:1113] duration metric: took 11.393089293s to wait for elevateKubeSystemPrivileges
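	The repeated `get sa default` calls above are a fixed 500ms poll: the RBAC bootstrap counts as done once the default service account exists. As a plain shell sketch of the same wait (not minikube's actual Go implementation):
	  until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done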
	I1025 10:52:31.844896  437544 kubeadm.go:402] duration metric: took 30.863974186s to StartCluster
	I1025 10:52:31.844916  437544 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:52:31.845015  437544 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:52:31.846059  437544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:52:31.846302  437544 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:52:31.846422  437544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:52:31.846682  437544 config.go:182] Loaded profile config "old-k8s-version-031983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:52:31.846797  437544 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:52:31.846863  437544 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-031983"
	I1025 10:52:31.846880  437544 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-031983"
	I1025 10:52:31.846909  437544 host.go:66] Checking if "old-k8s-version-031983" exists ...
	I1025 10:52:31.847343  437544 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-031983"
	I1025 10:52:31.847374  437544 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-031983"
	I1025 10:52:31.847721  437544 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:52:31.847725  437544 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:52:31.852377  437544 out.go:179] * Verifying Kubernetes components...
	I1025 10:52:31.855377  437544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:52:31.883182  437544 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:52:31.886606  437544 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:52:31.886628  437544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:52:31.886692  437544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:52:31.890143  437544 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-031983"
	I1025 10:52:31.890181  437544 host.go:66] Checking if "old-k8s-version-031983" exists ...
	I1025 10:52:31.891244  437544 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:52:31.920788  437544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:52:31.934985  437544 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:52:31.935005  437544 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:52:31.935068  437544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:52:31.964307  437544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:52:32.128937  437544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:52:32.154251  437544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:52:32.154317  437544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:52:32.243160  437544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:52:33.571893  437544 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.442872138s)
	I1025 10:52:33.571937  437544 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.417659105s)
	I1025 10:52:33.573637  437544 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-031983" to be "Ready" ...
	I1025 10:52:33.574999  437544 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.420625295s)
	I1025 10:52:33.575106  437544 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
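	The sed pipeline above splices a hosts block in front of CoreDNS's forward plugin so host.minikube.internal resolves to 192.168.85.1. The edit can be verified after the replace:
	  sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	  # expected to contain:
	  #     hosts {
	  #        192.168.85.1 host.minikube.internal
	  #        fallthrough
	  #     }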
	I1025 10:52:33.578490  437544 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.335291842s)
	I1025 10:52:33.623263  437544 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:52:33.626397  437544 addons.go:514] duration metric: took 1.779580041s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:52:34.079885  437544 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-031983" context rescaled to 1 replicas
	W1025 10:52:35.576887  437544 node_ready.go:57] node "old-k8s-version-031983" has "Ready":"False" status (will retry)
	W1025 10:52:38.077563  437544 node_ready.go:57] node "old-k8s-version-031983" has "Ready":"False" status (will retry)
	W1025 10:52:40.078093  437544 node_ready.go:57] node "old-k8s-version-031983" has "Ready":"False" status (will retry)
	W1025 10:52:42.084336  437544 node_ready.go:57] node "old-k8s-version-031983" has "Ready":"False" status (will retry)
	W1025 10:52:44.577301  437544 node_ready.go:57] node "old-k8s-version-031983" has "Ready":"False" status (will retry)
	I1025 10:52:46.578104  437544 node_ready.go:49] node "old-k8s-version-031983" is "Ready"
	I1025 10:52:46.578131  437544 node_ready.go:38] duration metric: took 13.004412809s for node "old-k8s-version-031983" to be "Ready" ...
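	The node_ready wait is equivalent to gating on the node's Ready condition, e.g. (sketch):
	  kubectl wait --for=condition=Ready node/old-k8s-version-031983 --timeout=6m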
	I1025 10:52:46.578145  437544 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:52:46.578204  437544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:52:46.591846  437544 api_server.go:72] duration metric: took 14.74550437s to wait for apiserver process to appear ...
	I1025 10:52:46.591868  437544 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:52:46.591888  437544 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:52:46.611700  437544 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 10:52:46.612989  437544 api_server.go:141] control plane version: v1.28.0
	I1025 10:52:46.613013  437544 api_server.go:131] duration metric: took 21.137262ms to wait for apiserver health ...
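	The healthz probe is a plain HTTPS GET that must return 200 with body "ok". It can be reproduced from the host with curl (-k skips CA verification here; the real check trusts the cluster CA):
	  curl -k https://192.168.85.2:8443/healthz
	  # ok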
	I1025 10:52:46.613022  437544 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:52:46.617232  437544 system_pods.go:59] 8 kube-system pods found
	I1025 10:52:46.617313  437544 system_pods.go:61] "coredns-5dd5756b68-jd2rz" [24ce3549-a06c-405e-943d-2982e2ee63de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:52:46.617337  437544 system_pods.go:61] "etcd-old-k8s-version-031983" [7afeb15f-fef7-4c88-ba96-7cd4bd24b4a4] Running
	I1025 10:52:46.617380  437544 system_pods.go:61] "kindnet-2sbx5" [b129bb16-e936-4865-b06a-a71756a88fa9] Running
	I1025 10:52:46.617410  437544 system_pods.go:61] "kube-apiserver-old-k8s-version-031983" [9a0fa9ff-e383-482b-9217-5089637f3579] Running
	I1025 10:52:46.617433  437544 system_pods.go:61] "kube-controller-manager-old-k8s-version-031983" [1e037440-e4e2-4392-8c8d-ac2bcceb2723] Running
	I1025 10:52:46.617469  437544 system_pods.go:61] "kube-proxy-q597g" [21cc5901-1ab1-495b-9b85-3812b03b4ddc] Running
	I1025 10:52:46.617495  437544 system_pods.go:61] "kube-scheduler-old-k8s-version-031983" [e163349f-3264-496f-b34f-7ad2a108c7fb] Running
	I1025 10:52:46.617522  437544 system_pods.go:61] "storage-provisioner" [7a27f19d-8bc4-4730-bb35-fd6d4311ef52] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:52:46.617561  437544 system_pods.go:74] duration metric: took 4.531512ms to wait for pod list to return data ...
	I1025 10:52:46.617589  437544 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:52:46.620008  437544 default_sa.go:45] found service account: "default"
	I1025 10:52:46.620072  437544 default_sa.go:55] duration metric: took 2.462135ms for default service account to be created ...
	I1025 10:52:46.620096  437544 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:52:46.623419  437544 system_pods.go:86] 8 kube-system pods found
	I1025 10:52:46.623504  437544 system_pods.go:89] "coredns-5dd5756b68-jd2rz" [24ce3549-a06c-405e-943d-2982e2ee63de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:52:46.623526  437544 system_pods.go:89] "etcd-old-k8s-version-031983" [7afeb15f-fef7-4c88-ba96-7cd4bd24b4a4] Running
	I1025 10:52:46.623568  437544 system_pods.go:89] "kindnet-2sbx5" [b129bb16-e936-4865-b06a-a71756a88fa9] Running
	I1025 10:52:46.623594  437544 system_pods.go:89] "kube-apiserver-old-k8s-version-031983" [9a0fa9ff-e383-482b-9217-5089637f3579] Running
	I1025 10:52:46.623621  437544 system_pods.go:89] "kube-controller-manager-old-k8s-version-031983" [1e037440-e4e2-4392-8c8d-ac2bcceb2723] Running
	I1025 10:52:46.623658  437544 system_pods.go:89] "kube-proxy-q597g" [21cc5901-1ab1-495b-9b85-3812b03b4ddc] Running
	I1025 10:52:46.623683  437544 system_pods.go:89] "kube-scheduler-old-k8s-version-031983" [e163349f-3264-496f-b34f-7ad2a108c7fb] Running
	I1025 10:52:46.623709  437544 system_pods.go:89] "storage-provisioner" [7a27f19d-8bc4-4730-bb35-fd6d4311ef52] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:52:46.623762  437544 retry.go:31] will retry after 218.085632ms: missing components: kube-dns
	I1025 10:52:46.846095  437544 system_pods.go:86] 8 kube-system pods found
	I1025 10:52:46.846130  437544 system_pods.go:89] "coredns-5dd5756b68-jd2rz" [24ce3549-a06c-405e-943d-2982e2ee63de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:52:46.846138  437544 system_pods.go:89] "etcd-old-k8s-version-031983" [7afeb15f-fef7-4c88-ba96-7cd4bd24b4a4] Running
	I1025 10:52:46.846144  437544 system_pods.go:89] "kindnet-2sbx5" [b129bb16-e936-4865-b06a-a71756a88fa9] Running
	I1025 10:52:46.846183  437544 system_pods.go:89] "kube-apiserver-old-k8s-version-031983" [9a0fa9ff-e383-482b-9217-5089637f3579] Running
	I1025 10:52:46.846197  437544 system_pods.go:89] "kube-controller-manager-old-k8s-version-031983" [1e037440-e4e2-4392-8c8d-ac2bcceb2723] Running
	I1025 10:52:46.846202  437544 system_pods.go:89] "kube-proxy-q597g" [21cc5901-1ab1-495b-9b85-3812b03b4ddc] Running
	I1025 10:52:46.846208  437544 system_pods.go:89] "kube-scheduler-old-k8s-version-031983" [e163349f-3264-496f-b34f-7ad2a108c7fb] Running
	I1025 10:52:46.846214  437544 system_pods.go:89] "storage-provisioner" [7a27f19d-8bc4-4730-bb35-fd6d4311ef52] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:52:46.846248  437544 retry.go:31] will retry after 263.906276ms: missing components: kube-dns
	I1025 10:52:47.114804  437544 system_pods.go:86] 8 kube-system pods found
	I1025 10:52:47.114839  437544 system_pods.go:89] "coredns-5dd5756b68-jd2rz" [24ce3549-a06c-405e-943d-2982e2ee63de] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:52:47.114846  437544 system_pods.go:89] "etcd-old-k8s-version-031983" [7afeb15f-fef7-4c88-ba96-7cd4bd24b4a4] Running
	I1025 10:52:47.114852  437544 system_pods.go:89] "kindnet-2sbx5" [b129bb16-e936-4865-b06a-a71756a88fa9] Running
	I1025 10:52:47.114890  437544 system_pods.go:89] "kube-apiserver-old-k8s-version-031983" [9a0fa9ff-e383-482b-9217-5089637f3579] Running
	I1025 10:52:47.114908  437544 system_pods.go:89] "kube-controller-manager-old-k8s-version-031983" [1e037440-e4e2-4392-8c8d-ac2bcceb2723] Running
	I1025 10:52:47.114913  437544 system_pods.go:89] "kube-proxy-q597g" [21cc5901-1ab1-495b-9b85-3812b03b4ddc] Running
	I1025 10:52:47.114918  437544 system_pods.go:89] "kube-scheduler-old-k8s-version-031983" [e163349f-3264-496f-b34f-7ad2a108c7fb] Running
	I1025 10:52:47.114927  437544 system_pods.go:89] "storage-provisioner" [7a27f19d-8bc4-4730-bb35-fd6d4311ef52] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:52:47.114950  437544 retry.go:31] will retry after 455.836612ms: missing components: kube-dns
	I1025 10:52:47.576399  437544 system_pods.go:86] 8 kube-system pods found
	I1025 10:52:47.576432  437544 system_pods.go:89] "coredns-5dd5756b68-jd2rz" [24ce3549-a06c-405e-943d-2982e2ee63de] Running
	I1025 10:52:47.576440  437544 system_pods.go:89] "etcd-old-k8s-version-031983" [7afeb15f-fef7-4c88-ba96-7cd4bd24b4a4] Running
	I1025 10:52:47.576445  437544 system_pods.go:89] "kindnet-2sbx5" [b129bb16-e936-4865-b06a-a71756a88fa9] Running
	I1025 10:52:47.576449  437544 system_pods.go:89] "kube-apiserver-old-k8s-version-031983" [9a0fa9ff-e383-482b-9217-5089637f3579] Running
	I1025 10:52:47.576473  437544 system_pods.go:89] "kube-controller-manager-old-k8s-version-031983" [1e037440-e4e2-4392-8c8d-ac2bcceb2723] Running
	I1025 10:52:47.576483  437544 system_pods.go:89] "kube-proxy-q597g" [21cc5901-1ab1-495b-9b85-3812b03b4ddc] Running
	I1025 10:52:47.576489  437544 system_pods.go:89] "kube-scheduler-old-k8s-version-031983" [e163349f-3264-496f-b34f-7ad2a108c7fb] Running
	I1025 10:52:47.576497  437544 system_pods.go:89] "storage-provisioner" [7a27f19d-8bc4-4730-bb35-fd6d4311ef52] Running
	I1025 10:52:47.576505  437544 system_pods.go:126] duration metric: took 956.391412ms to wait for k8s-apps to be running ...
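	All three retries above were waiting on kube-dns alone (storage-provisioner was also Pending at first, but it is not a tracked component). A blocking equivalent of that last gate (sketch):
	  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=2m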
	I1025 10:52:47.576513  437544 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:52:47.576579  437544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:52:47.590682  437544 system_svc.go:56] duration metric: took 14.158302ms WaitForService to wait for kubelet
	I1025 10:52:47.590761  437544 kubeadm.go:586] duration metric: took 15.744420927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:52:47.590794  437544 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:52:47.593792  437544 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:52:47.593871  437544 node_conditions.go:123] node cpu capacity is 2
	I1025 10:52:47.593899  437544 node_conditions.go:105] duration metric: took 3.086683ms to run NodePressure ...
	I1025 10:52:47.593943  437544 start.go:241] waiting for startup goroutines ...
	I1025 10:52:47.593970  437544 start.go:246] waiting for cluster config update ...
	I1025 10:52:47.594042  437544 start.go:255] writing updated cluster config ...
	I1025 10:52:47.594368  437544 ssh_runner.go:195] Run: rm -f paused
	I1025 10:52:47.597953  437544 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:52:47.602574  437544 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-jd2rz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:52:47.607870  437544 pod_ready.go:94] pod "coredns-5dd5756b68-jd2rz" is "Ready"
	I1025 10:52:47.607895  437544 pod_ready.go:86] duration metric: took 5.29403ms for pod "coredns-5dd5756b68-jd2rz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:52:47.611197  437544 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:52:47.616886  437544 pod_ready.go:94] pod "etcd-old-k8s-version-031983" is "Ready"
	I1025 10:52:47.616920  437544 pod_ready.go:86] duration metric: took 5.693608ms for pod "etcd-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:52:47.620456  437544 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:52:47.625925  437544 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-031983" is "Ready"
	I1025 10:52:47.625949  437544 pod_ready.go:86] duration metric: took 5.46716ms for pod "kube-apiserver-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:52:47.629519  437544 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:52:48.005598  437544 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-031983" is "Ready"
	I1025 10:52:48.005683  437544 pod_ready.go:86] duration metric: took 376.132706ms for pod "kube-controller-manager-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:52:48.204130  437544 pod_ready.go:83] waiting for pod "kube-proxy-q597g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:52:48.602998  437544 pod_ready.go:94] pod "kube-proxy-q597g" is "Ready"
	I1025 10:52:48.603075  437544 pod_ready.go:86] duration metric: took 398.919788ms for pod "kube-proxy-q597g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:52:48.802690  437544 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:52:49.201925  437544 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-031983" is "Ready"
	I1025 10:52:49.201953  437544 pod_ready.go:86] duration metric: took 399.235293ms for pod "kube-scheduler-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:52:49.201966  437544 pod_ready.go:40] duration metric: took 1.603928105s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
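	Each control-plane static pod carries kubeadm's standard labels, so the sweep above can be spot-checked in one query (assuming the usual tier=control-plane label):
	  kubectl -n kube-system get pods -l tier=control-plane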
	I1025 10:52:49.257163  437544 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1025 10:52:49.260361  437544 out.go:203] 
	W1025 10:52:49.263315  437544 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:52:49.266377  437544 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:52:49.269728  437544 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-031983" cluster and "default" namespace by default
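	Given the flagged 5-minor skew between the host kubectl (1.33.2) and the cluster (1.28.0), the bundled client sidesteps compatibility warnings:
	  minikube -p old-k8s-version-031983 kubectl -- get pods -A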
	
	
	==> CRI-O <==
	Oct 25 10:52:46 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:46.5356399Z" level=info msg="Created container 1fb0dc4059b3341f70f41b48900f51235dfb4132390bc46a0a37f63cc69ddebe: kube-system/coredns-5dd5756b68-jd2rz/coredns" id=d2d23715-2a60-4134-9b57-bad1f275ca7d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:52:46 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:46.536255365Z" level=info msg="Starting container: 1fb0dc4059b3341f70f41b48900f51235dfb4132390bc46a0a37f63cc69ddebe" id=64ad96dd-0b2d-4af3-9628-a55581b56d74 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:52:46 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:46.547032014Z" level=info msg="Started container" PID=1956 containerID=1fb0dc4059b3341f70f41b48900f51235dfb4132390bc46a0a37f63cc69ddebe description=kube-system/coredns-5dd5756b68-jd2rz/coredns id=64ad96dd-0b2d-4af3-9628-a55581b56d74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e314492d7daa51de06cfe2389d30254ef4973f09f4cb292911974fb048e98b51
	Oct 25 10:52:49 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:49.779487493Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e627530b-0391-472f-a190-c3f2d0764d39 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:52:49 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:49.779572318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:52:49 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:49.78538244Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6a48c583d944a938e56a3564e98f99f2b89f111999b9f4e7230d90d7f3891ce1 UID:3ef55609-5cc4-4fa3-879c-98e876c9ac41 NetNS:/var/run/netns/cf9e02cf-e322-462c-b718-5129fc07357f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000498ce0}] Aliases:map[]}"
	Oct 25 10:52:49 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:49.785436159Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 10:52:49 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:49.797316121Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6a48c583d944a938e56a3564e98f99f2b89f111999b9f4e7230d90d7f3891ce1 UID:3ef55609-5cc4-4fa3-879c-98e876c9ac41 NetNS:/var/run/netns/cf9e02cf-e322-462c-b718-5129fc07357f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000498ce0}] Aliases:map[]}"
	Oct 25 10:52:49 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:49.797461665Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 10:52:49 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:49.800290163Z" level=info msg="Ran pod sandbox 6a48c583d944a938e56a3564e98f99f2b89f111999b9f4e7230d90d7f3891ce1 with infra container: default/busybox/POD" id=e627530b-0391-472f-a190-c3f2d0764d39 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:52:49 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:49.803708722Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cdcabe26-283d-4bc6-833f-75390ab3efc5 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:52:49 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:49.803918693Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cdcabe26-283d-4bc6-833f-75390ab3efc5 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:52:49 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:49.803981734Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=cdcabe26-283d-4bc6-833f-75390ab3efc5 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:52:49 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:49.806299901Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eb70f50e-9af4-4acf-a824-06c9b4a13b38 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:52:49 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:49.809545289Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 10:52:51 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:51.776335038Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=eb70f50e-9af4-4acf-a824-06c9b4a13b38 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:52:51 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:51.781849813Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d4dded37-df73-412c-b15c-7e648bad7136 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:52:51 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:51.785620441Z" level=info msg="Creating container: default/busybox/busybox" id=42fd6ba2-d920-468b-adbf-74a1e7dad467 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:52:51 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:51.785770941Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:52:51 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:51.791866661Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:52:51 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:51.792464287Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:52:51 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:51.812137847Z" level=info msg="Created container 056940c4f64e1e53dde4cfd285f16b3e704406e6345d653ccb0add350116c1c9: default/busybox/busybox" id=42fd6ba2-d920-468b-adbf-74a1e7dad467 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:52:51 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:51.815160587Z" level=info msg="Starting container: 056940c4f64e1e53dde4cfd285f16b3e704406e6345d653ccb0add350116c1c9" id=e2eb86c6-b67e-46d8-b0e1-dc0bb0b84c97 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:52:51 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:51.819729015Z" level=info msg="Started container" PID=2012 containerID=056940c4f64e1e53dde4cfd285f16b3e704406e6345d653ccb0add350116c1c9 description=default/busybox/busybox id=e2eb86c6-b67e-46d8-b0e1-dc0bb0b84c97 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a48c583d944a938e56a3564e98f99f2b89f111999b9f4e7230d90d7f3891ce1
	Oct 25 10:52:58 old-k8s-version-031983 crio[841]: time="2025-10-25T10:52:58.61992837Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
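	The container inventory below is what crictl reports on the node; it can be regenerated with:
	  minikube -p old-k8s-version-031983 ssh -- sudo crictl ps -a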
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	056940c4f64e1       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   6a48c583d944a       busybox                                          default
	1fb0dc4059b33       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   e314492d7daa5       coredns-5dd5756b68-jd2rz                         kube-system
	9f6650565c49d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   732768cc83d1a       storage-provisioner                              kube-system
	d684937e58611       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   f46f7abcba367       kindnet-2sbx5                                    kube-system
	a2ec9a67a381b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      28 seconds ago      Running             kube-proxy                0                   a3139e41091fd       kube-proxy-q597g                                 kube-system
	384d25cb3f199       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   7a983dfae7bf1       kube-scheduler-old-k8s-version-031983            kube-system
	67128e788e947       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   219105181c2e9       kube-controller-manager-old-k8s-version-031983   kube-system
	a7deea45d0eb9       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   fd0f7cccd3aec       etcd-old-k8s-version-031983                      kube-system
	da253cebfd6ab       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   516ee39cd293e       kube-apiserver-old-k8s-version-031983            kube-system
	
	
	==> coredns [1fb0dc4059b3341f70f41b48900f51235dfb4132390bc46a0a37f63cc69ddebe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38388 - 47919 "HINFO IN 8778709967806661550.5769820750492734599. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014596344s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-031983
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-031983
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=old-k8s-version-031983
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_52_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:52:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-031983
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:53:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:52:50 +0000   Sat, 25 Oct 2025 10:52:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:52:50 +0000   Sat, 25 Oct 2025 10:52:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:52:50 +0000   Sat, 25 Oct 2025 10:52:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:52:50 +0000   Sat, 25 Oct 2025 10:52:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-031983
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d37866a6-3d06-4c4f-bdc5-afc6ab378351
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-jd2rz                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-031983                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-2sbx5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-031983             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-031983    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-q597g                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-031983             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 49s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-031983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-031983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-031983 event: Registered Node old-k8s-version-031983 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-031983 status is now: NodeReady
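	The block above is standard kubectl describe output and can be reproduced against the live cluster with:
	  kubectl describe node old-k8s-version-031983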
	
	
	==> dmesg <==
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:25] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:31] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[ +23.146409] overlayfs: idmapped layers are currently not supported
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a7deea45d0eb9a21bc3bb958a991270275a06516051c5a7823b5b9cc14b958f7] <==
	{"level":"info","ts":"2025-10-25T10:52:12.284949Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:52:12.285004Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:52:12.285051Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:52:12.284354Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:52:12.285542Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:52:12.286241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-25T10:52:12.286385Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-25T10:52:12.443723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-25T10:52:12.443827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-25T10:52:12.443872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-25T10:52:12.443913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-25T10:52:12.443971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T10:52:12.444008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-25T10:52:12.44404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T10:52:12.446136Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:52:12.450211Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-031983 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T10:52:12.450375Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:52:12.450992Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:52:12.451118Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:52:12.451167Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:52:12.451217Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:52:12.452091Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T10:52:12.456033Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T10:52:12.456083Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-25T10:52:12.464159Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 10:53:00 up  2:35,  0 user,  load average: 3.80, 3.76, 2.87
	Linux old-k8s-version-031983 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d684937e58611b6252d677a820db64d17c37cd51b9313c6c6cb109bd7457eada] <==
	I1025 10:52:35.422124       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:52:35.422365       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:52:35.422501       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:52:35.422521       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:52:35.422532       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:52:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:52:35.715017       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:52:35.723047       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:52:35.723077       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:52:35.723245       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:52:35.923227       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:52:35.923252       1 metrics.go:72] Registering metrics
	I1025 10:52:35.923316       1 controller.go:711] "Syncing nftables rules"
	I1025 10:52:45.626080       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:52:45.626176       1 main.go:301] handling current node
	I1025 10:52:55.625167       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:52:55.625208       1 main.go:301] handling current node
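
kindnet on this single-node profile only tracks its own node's IP map and re-syncs rules on a ten-second cadence (10:52:45, 10:52:55); the earlier `nri plugin exited` line just means the runtime exposes no NRI socket at /var/run/nri/nri.sock, which kindnet tolerates (informer sync and metrics registration proceed immediately after). To see the CNI config it operates against, a quick check via minikube's ssh wrapper:

	minikube -p old-k8s-version-031983 ssh -- sudo ls -l /etc/cni/net.d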
	
	
	==> kube-apiserver [da253cebfd6ab8b535a08befdca1911c53aced99522d86c82e8369348edb56a1] <==
	I1025 10:52:15.853625       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:52:15.856751       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 10:52:15.859291       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 10:52:15.862085       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 10:52:15.862296       1 aggregator.go:166] initial CRD sync complete...
	I1025 10:52:15.862335       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 10:52:15.862363       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:52:15.862393       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:52:15.877577       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 10:52:15.904626       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:52:16.657953       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:52:16.663118       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:52:16.663141       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:52:17.384942       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:52:17.429761       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:52:17.536738       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:52:17.546353       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 10:52:17.547480       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 10:52:17.552641       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:52:17.807838       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 10:52:19.087321       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 10:52:19.101814       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:52:19.127704       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1025 10:52:30.970362       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 10:52:31.220781       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
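
The apiserver trace ends with routine quota-evaluator registration for the core workload types, i.e., a normal bring-up. A direct readiness probe against the same endpoint shows the per-check breakdown; /readyz?verbose is a standard kube-apiserver path:

	kubectl --context old-k8s-version-031983 get --raw '/readyz?verbose'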
	
	
	==> kube-controller-manager [67128e788e947254fffeda86e9cf1860ab32ca427ea12268c4c4285fe1905a81] <==
	I1025 10:52:30.864052       1 shared_informer.go:318] Caches are synced for crt configmap
	I1025 10:52:30.865183       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1025 10:52:30.976503       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1025 10:52:31.212819       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:52:31.212856       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 10:52:31.227313       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:52:31.239703       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2sbx5"
	I1025 10:52:31.249615       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q597g"
	I1025 10:52:31.676375       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-nvnt5"
	I1025 10:52:31.691258       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jd2rz"
	I1025 10:52:31.699081       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="723.359547ms"
	I1025 10:52:31.713127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.970672ms"
	I1025 10:52:31.732381       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.205151ms"
	I1025 10:52:31.732505       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.204µs"
	I1025 10:52:33.642091       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1025 10:52:33.689373       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-nvnt5"
	I1025 10:52:33.710802       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.423535ms"
	I1025 10:52:33.726319       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.466539ms"
	I1025 10:52:33.749098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.504471ms"
	I1025 10:52:33.749246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.927µs"
	I1025 10:52:46.136722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.975µs"
	I1025 10:52:46.157102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="807.187µs"
	I1025 10:52:47.512008       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.950072ms"
	I1025 10:52:47.512457       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.238µs"
	I1025 10:52:50.636647       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [a2ec9a67a381b70ed84fade525a6729c23394eca489f25abd03b524c1e7b16d0] <==
	I1025 10:52:32.343481       1 server_others.go:69] "Using iptables proxy"
	I1025 10:52:32.364777       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1025 10:52:32.405128       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:52:32.412642       1 server_others.go:152] "Using iptables Proxier"
	I1025 10:52:32.412688       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 10:52:32.412697       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 10:52:32.412729       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 10:52:32.412926       1 server.go:846] "Version info" version="v1.28.0"
	I1025 10:52:32.412935       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:52:32.414007       1 config.go:188] "Starting service config controller"
	I1025 10:52:32.414032       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 10:52:32.414051       1 config.go:97] "Starting endpoint slice config controller"
	I1025 10:52:32.414055       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 10:52:32.414588       1 config.go:315] "Starting node config controller"
	I1025 10:52:32.415655       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 10:52:32.517106       1 shared_informer.go:318] Caches are synced for node config
	I1025 10:52:32.517133       1 shared_informer.go:318] Caches are synced for service config
	I1025 10:52:32.517158       1 shared_informer.go:318] Caches are synced for endpoint slice config
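
kube-proxy is running in iptables mode with no IPv6 cluster CIDR configured, so local-traffic detection for that family falls back to a no-op, exactly as the two server_others.go lines say. The resulting service chains can be inspected on the node; a sketch:

	minikube -p old-k8s-version-031983 ssh -- sudo iptables -t nat -L KUBE-SERVICES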
	
	
	==> kube-scheduler [384d25cb3f199cf3272bdd25f236afa0162fcb93ab10f7641022721f9c561bcb] <==
	W1025 10:52:16.157338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1025 10:52:16.157388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1025 10:52:16.157659       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 10:52:16.157718       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 10:52:16.157801       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 10:52:16.157838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 10:52:16.157955       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 10:52:16.158064       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1025 10:52:16.158171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 10:52:16.158200       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 10:52:16.158183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 10:52:16.158253       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 10:52:16.158269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 10:52:16.158261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 10:52:16.158335       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 10:52:16.158353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 10:52:16.158445       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 10:52:16.158501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 10:52:16.983561       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 10:52:16.983600       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 10:52:17.125752       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 10:52:17.125868       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1025 10:52:17.370790       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 10:52:17.370898       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1025 10:52:19.144872       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
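
The scheduler's wall of `forbidden` warnings is the usual startup race: its informers begin listing before the apiserver has installed the default RBAC bindings (compare the apiserver's rolebindings evaluator registration at 10:52:17.42), and the final `Caches are synced` line at 10:52:19 shows it recovered on retry. The permission can be spot-checked after the fact with impersonation:

	kubectl --context old-k8s-version-031983 auth can-i list nodes --as=system:kube-scheduler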
	
	
	==> kubelet <==
	Oct 25 10:52:31 old-k8s-version-031983 kubelet[1398]: I1025 10:52:31.349174    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b129bb16-e936-4865-b06a-a71756a88fa9-xtables-lock\") pod \"kindnet-2sbx5\" (UID: \"b129bb16-e936-4865-b06a-a71756a88fa9\") " pod="kube-system/kindnet-2sbx5"
	Oct 25 10:52:31 old-k8s-version-031983 kubelet[1398]: I1025 10:52:31.349197    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b129bb16-e936-4865-b06a-a71756a88fa9-lib-modules\") pod \"kindnet-2sbx5\" (UID: \"b129bb16-e936-4865-b06a-a71756a88fa9\") " pod="kube-system/kindnet-2sbx5"
	Oct 25 10:52:31 old-k8s-version-031983 kubelet[1398]: I1025 10:52:31.349219    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21cc5901-1ab1-495b-9b85-3812b03b4ddc-xtables-lock\") pod \"kube-proxy-q597g\" (UID: \"21cc5901-1ab1-495b-9b85-3812b03b4ddc\") " pod="kube-system/kube-proxy-q597g"
	Oct 25 10:52:31 old-k8s-version-031983 kubelet[1398]: E1025 10:52:31.460385    1398 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 25 10:52:31 old-k8s-version-031983 kubelet[1398]: E1025 10:52:31.460418    1398 projected.go:198] Error preparing data for projected volume kube-api-access-cqklf for pod kube-system/kube-proxy-q597g: configmap "kube-root-ca.crt" not found
	Oct 25 10:52:31 old-k8s-version-031983 kubelet[1398]: E1025 10:52:31.460493    1398 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21cc5901-1ab1-495b-9b85-3812b03b4ddc-kube-api-access-cqklf podName:21cc5901-1ab1-495b-9b85-3812b03b4ddc nodeName:}" failed. No retries permitted until 2025-10-25 10:52:31.960461702 +0000 UTC m=+12.908986627 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqklf" (UniqueName: "kubernetes.io/projected/21cc5901-1ab1-495b-9b85-3812b03b4ddc-kube-api-access-cqklf") pod "kube-proxy-q597g" (UID: "21cc5901-1ab1-495b-9b85-3812b03b4ddc") : configmap "kube-root-ca.crt" not found
	Oct 25 10:52:31 old-k8s-version-031983 kubelet[1398]: E1025 10:52:31.461471    1398 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 25 10:52:31 old-k8s-version-031983 kubelet[1398]: E1025 10:52:31.461505    1398 projected.go:198] Error preparing data for projected volume kube-api-access-97tg6 for pod kube-system/kindnet-2sbx5: configmap "kube-root-ca.crt" not found
	Oct 25 10:52:31 old-k8s-version-031983 kubelet[1398]: E1025 10:52:31.461548    1398 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b129bb16-e936-4865-b06a-a71756a88fa9-kube-api-access-97tg6 podName:b129bb16-e936-4865-b06a-a71756a88fa9 nodeName:}" failed. No retries permitted until 2025-10-25 10:52:31.96153235 +0000 UTC m=+12.910057275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-97tg6" (UniqueName: "kubernetes.io/projected/b129bb16-e936-4865-b06a-a71756a88fa9-kube-api-access-97tg6") pod "kindnet-2sbx5" (UID: "b129bb16-e936-4865-b06a-a71756a88fa9") : configmap "kube-root-ca.crt" not found
	Oct 25 10:52:32 old-k8s-version-031983 kubelet[1398]: W1025 10:52:32.161759    1398 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/crio-f46f7abcba3674441e6c3ac9d1b970bcf1a28147436ad0518447cebdb093b863 WatchSource:0}: Error finding container f46f7abcba3674441e6c3ac9d1b970bcf1a28147436ad0518447cebdb093b863: Status 404 returned error can't find the container with id f46f7abcba3674441e6c3ac9d1b970bcf1a28147436ad0518447cebdb093b863
	Oct 25 10:52:32 old-k8s-version-031983 kubelet[1398]: W1025 10:52:32.185692    1398 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/crio-a3139e41091fd64d544f67da45d6dfa0aee633014de0faeefcb2128286d970d3 WatchSource:0}: Error finding container a3139e41091fd64d544f67da45d6dfa0aee633014de0faeefcb2128286d970d3: Status 404 returned error can't find the container with id a3139e41091fd64d544f67da45d6dfa0aee633014de0faeefcb2128286d970d3
	Oct 25 10:52:32 old-k8s-version-031983 kubelet[1398]: I1025 10:52:32.448965    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-q597g" podStartSLOduration=1.448910559 podCreationTimestamp="2025-10-25 10:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:52:32.448842767 +0000 UTC m=+13.397367709" watchObservedRunningTime="2025-10-25 10:52:32.448910559 +0000 UTC m=+13.397435484"
	Oct 25 10:52:35 old-k8s-version-031983 kubelet[1398]: I1025 10:52:35.453482    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-2sbx5" podStartSLOduration=1.27988984 podCreationTimestamp="2025-10-25 10:52:31 +0000 UTC" firstStartedPulling="2025-10-25 10:52:32.16706399 +0000 UTC m=+13.115588923" lastFinishedPulling="2025-10-25 10:52:35.340603261 +0000 UTC m=+16.289128186" observedRunningTime="2025-10-25 10:52:35.453186639 +0000 UTC m=+16.401711573" watchObservedRunningTime="2025-10-25 10:52:35.453429103 +0000 UTC m=+16.401954028"
	Oct 25 10:52:46 old-k8s-version-031983 kubelet[1398]: I1025 10:52:46.098234    1398 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 25 10:52:46 old-k8s-version-031983 kubelet[1398]: I1025 10:52:46.133298    1398 topology_manager.go:215] "Topology Admit Handler" podUID="24ce3549-a06c-405e-943d-2982e2ee63de" podNamespace="kube-system" podName="coredns-5dd5756b68-jd2rz"
	Oct 25 10:52:46 old-k8s-version-031983 kubelet[1398]: I1025 10:52:46.141925    1398 topology_manager.go:215] "Topology Admit Handler" podUID="7a27f19d-8bc4-4730-bb35-fd6d4311ef52" podNamespace="kube-system" podName="storage-provisioner"
	Oct 25 10:52:46 old-k8s-version-031983 kubelet[1398]: I1025 10:52:46.156770    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24ce3549-a06c-405e-943d-2982e2ee63de-config-volume\") pod \"coredns-5dd5756b68-jd2rz\" (UID: \"24ce3549-a06c-405e-943d-2982e2ee63de\") " pod="kube-system/coredns-5dd5756b68-jd2rz"
	Oct 25 10:52:46 old-k8s-version-031983 kubelet[1398]: I1025 10:52:46.157005    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ln8n\" (UniqueName: \"kubernetes.io/projected/24ce3549-a06c-405e-943d-2982e2ee63de-kube-api-access-9ln8n\") pod \"coredns-5dd5756b68-jd2rz\" (UID: \"24ce3549-a06c-405e-943d-2982e2ee63de\") " pod="kube-system/coredns-5dd5756b68-jd2rz"
	Oct 25 10:52:46 old-k8s-version-031983 kubelet[1398]: I1025 10:52:46.257374    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7a27f19d-8bc4-4730-bb35-fd6d4311ef52-tmp\") pod \"storage-provisioner\" (UID: \"7a27f19d-8bc4-4730-bb35-fd6d4311ef52\") " pod="kube-system/storage-provisioner"
	Oct 25 10:52:46 old-k8s-version-031983 kubelet[1398]: I1025 10:52:46.257432    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq882\" (UniqueName: \"kubernetes.io/projected/7a27f19d-8bc4-4730-bb35-fd6d4311ef52-kube-api-access-gq882\") pod \"storage-provisioner\" (UID: \"7a27f19d-8bc4-4730-bb35-fd6d4311ef52\") " pod="kube-system/storage-provisioner"
	Oct 25 10:52:46 old-k8s-version-031983 kubelet[1398]: W1025 10:52:46.460580    1398 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/crio-732768cc83d1a09a25ec39ef0d6eb539ea5de5e03641532256a0f3c0bbc2be04 WatchSource:0}: Error finding container 732768cc83d1a09a25ec39ef0d6eb539ea5de5e03641532256a0f3c0bbc2be04: Status 404 returned error can't find the container with id 732768cc83d1a09a25ec39ef0d6eb539ea5de5e03641532256a0f3c0bbc2be04
	Oct 25 10:52:47 old-k8s-version-031983 kubelet[1398]: I1025 10:52:47.494881    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.494835523 podCreationTimestamp="2025-10-25 10:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:52:47.478138564 +0000 UTC m=+28.426663489" watchObservedRunningTime="2025-10-25 10:52:47.494835523 +0000 UTC m=+28.443360472"
	Oct 25 10:52:49 old-k8s-version-031983 kubelet[1398]: I1025 10:52:49.477379    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-jd2rz" podStartSLOduration=18.477330004 podCreationTimestamp="2025-10-25 10:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:52:47.496856341 +0000 UTC m=+28.445381266" watchObservedRunningTime="2025-10-25 10:52:49.477330004 +0000 UTC m=+30.425854937"
	Oct 25 10:52:49 old-k8s-version-031983 kubelet[1398]: I1025 10:52:49.478074    1398 topology_manager.go:215] "Topology Admit Handler" podUID="3ef55609-5cc4-4fa3-879c-98e876c9ac41" podNamespace="default" podName="busybox"
	Oct 25 10:52:49 old-k8s-version-031983 kubelet[1398]: I1025 10:52:49.578826    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hg6p\" (UniqueName: \"kubernetes.io/projected/3ef55609-5cc4-4fa3-879c-98e876c9ac41-kube-api-access-7hg6p\") pod \"busybox\" (UID: \"3ef55609-5cc4-4fa3-879c-98e876c9ac41\") " pod="default/busybox"
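
The kubelet shows the same race in miniature: both `configmap "kube-root-ca.crt" not found` mount failures fire before the controller-manager's root-CA publisher has populated the namespace, the 500ms retries succeed, and both pods report startup durations seconds later. If in doubt, confirm the configmap now exists:

	kubectl --context old-k8s-version-031983 -n kube-system get configmap kube-root-ca.crt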
	
	
	==> storage-provisioner [9f6650565c49dd22de46d597f3c9ffee804933c46f21292f30b72173101f89b7] <==
	I1025 10:52:46.538403       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:52:46.562078       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:52:46.562128       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 10:52:46.578986       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:52:46.579237       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-031983_735c9f1c-d883-4580-9e96-592e9d8dc9cc!
	I1025 10:52:46.582720       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bd6d084-f7ff-4686-9d53-994a26c512ba", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-031983_735c9f1c-d883-4580-9e96-592e9d8dc9cc became leader
	I1025 10:52:46.680080       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-031983_735c9f1c-d883-4580-9e96-592e9d8dc9cc!
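
The provisioner still takes its leader lock through an Endpoints object rather than a Lease, which is why the election event above references Kind "Endpoints". The lock named in the log can be dumped directly:

	kubectl --context old-k8s-version-031983 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml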
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-031983 -n old-k8s-version-031983
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-031983 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.81s)
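
The post-mortem pod query above lists only pods whose phase is not Running, so an empty result means every pod reported phase Running. Spelled out for manual use (same invocation as the helper's):

	kubectl --context old-k8s-version-031983 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'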

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-031983 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-031983 --alsologtostderr -v=1: exit status 80 (1.875089793s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-031983 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:54:18.079559  443384 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:54:18.079689  443384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:54:18.079702  443384 out.go:374] Setting ErrFile to fd 2...
	I1025 10:54:18.079708  443384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:54:18.080011  443384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:54:18.080402  443384 out.go:368] Setting JSON to false
	I1025 10:54:18.080440  443384 mustload.go:65] Loading cluster: old-k8s-version-031983
	I1025 10:54:18.080867  443384 config.go:182] Loaded profile config "old-k8s-version-031983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:54:18.081464  443384 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:54:18.104085  443384 host.go:66] Checking if "old-k8s-version-031983" exists ...
	I1025 10:54:18.104514  443384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:54:18.175326  443384 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:54:18.164783257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:54:18.176000  443384 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-031983 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:54:18.181262  443384 out.go:179] * Pausing node old-k8s-version-031983 ... 
	I1025 10:54:18.184111  443384 host.go:66] Checking if "old-k8s-version-031983" exists ...
	I1025 10:54:18.184475  443384 ssh_runner.go:195] Run: systemctl --version
	I1025 10:54:18.184525  443384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:54:18.202902  443384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:54:18.308831  443384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:54:18.329231  443384 pause.go:52] kubelet running: true
	I1025 10:54:18.329374  443384 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:54:18.593430  443384 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:54:18.593565  443384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:54:18.673965  443384 cri.go:89] found id: "1b6f7fca84250e969eb92a69cd684590b8335cd2574fefd2cc8d891ee242afd1"
	I1025 10:54:18.674011  443384 cri.go:89] found id: "282bf80083c3f2f30757b4a4e969d8d35b11cd8b2d4a79b5d913e2e97221898b"
	I1025 10:54:18.674016  443384 cri.go:89] found id: "61a0b589a5afa1251610e31cdc2e6f467b5f10034155e9f8f31f35ca1b7206db"
	I1025 10:54:18.674021  443384 cri.go:89] found id: "9fd9a8a5cc009ac96b64da624b784533a1e04c500388713c6f7b31e40b933a8a"
	I1025 10:54:18.674024  443384 cri.go:89] found id: "89d684ba83e381a7a64dfe04e56c3aeb40ac87bb22b331b2ff4e4a21ccbcf692"
	I1025 10:54:18.674031  443384 cri.go:89] found id: "eeeddcccacdfbafcc1ee8c599ecbc7ea9e0d84371cb6d1703687ee61d8bb755f"
	I1025 10:54:18.674035  443384 cri.go:89] found id: "e8ffdbbd81192b725fe1a39da32c0b7c2876d35dcd77f38936cc2fce64d55965"
	I1025 10:54:18.674039  443384 cri.go:89] found id: "06385d2fbad2b260e0ee5f55b8d5cba605e51721e2396456ae2851909149c4a9"
	I1025 10:54:18.674042  443384 cri.go:89] found id: "daa7921a2bd460dd730ca528a51c0a28ebc5238cac5327f709bbab5a00da8e58"
	I1025 10:54:18.674063  443384 cri.go:89] found id: "61207ccd40885efe969607525a97edab12360330c40d8763143e717d0f79d0a0"
	I1025 10:54:18.674076  443384 cri.go:89] found id: "aaf729d4b726bd4ed2dcc9bceb249d54230b6890b628f49f2294833bd5b31249"
	I1025 10:54:18.674079  443384 cri.go:89] found id: ""
	I1025 10:54:18.674129  443384 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:54:18.687679  443384 retry.go:31] will retry after 301.806348ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:54:18Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:54:18.990132  443384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:54:19.005220  443384 pause.go:52] kubelet running: false
	I1025 10:54:19.005398  443384 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:54:19.189104  443384 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:54:19.189245  443384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:54:19.270790  443384 cri.go:89] found id: "1b6f7fca84250e969eb92a69cd684590b8335cd2574fefd2cc8d891ee242afd1"
	I1025 10:54:19.270819  443384 cri.go:89] found id: "282bf80083c3f2f30757b4a4e969d8d35b11cd8b2d4a79b5d913e2e97221898b"
	I1025 10:54:19.270825  443384 cri.go:89] found id: "61a0b589a5afa1251610e31cdc2e6f467b5f10034155e9f8f31f35ca1b7206db"
	I1025 10:54:19.270829  443384 cri.go:89] found id: "9fd9a8a5cc009ac96b64da624b784533a1e04c500388713c6f7b31e40b933a8a"
	I1025 10:54:19.270833  443384 cri.go:89] found id: "89d684ba83e381a7a64dfe04e56c3aeb40ac87bb22b331b2ff4e4a21ccbcf692"
	I1025 10:54:19.270838  443384 cri.go:89] found id: "eeeddcccacdfbafcc1ee8c599ecbc7ea9e0d84371cb6d1703687ee61d8bb755f"
	I1025 10:54:19.270841  443384 cri.go:89] found id: "e8ffdbbd81192b725fe1a39da32c0b7c2876d35dcd77f38936cc2fce64d55965"
	I1025 10:54:19.270866  443384 cri.go:89] found id: "06385d2fbad2b260e0ee5f55b8d5cba605e51721e2396456ae2851909149c4a9"
	I1025 10:54:19.270874  443384 cri.go:89] found id: "daa7921a2bd460dd730ca528a51c0a28ebc5238cac5327f709bbab5a00da8e58"
	I1025 10:54:19.270887  443384 cri.go:89] found id: "61207ccd40885efe969607525a97edab12360330c40d8763143e717d0f79d0a0"
	I1025 10:54:19.270896  443384 cri.go:89] found id: "aaf729d4b726bd4ed2dcc9bceb249d54230b6890b628f49f2294833bd5b31249"
	I1025 10:54:19.270899  443384 cri.go:89] found id: ""
	I1025 10:54:19.270968  443384 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:54:19.283535  443384 retry.go:31] will retry after 297.171108ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:54:19Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:54:19.581135  443384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:54:19.596217  443384 pause.go:52] kubelet running: false
	I1025 10:54:19.596309  443384 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:54:19.793370  443384 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:54:19.793463  443384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:54:19.863153  443384 cri.go:89] found id: "1b6f7fca84250e969eb92a69cd684590b8335cd2574fefd2cc8d891ee242afd1"
	I1025 10:54:19.863179  443384 cri.go:89] found id: "282bf80083c3f2f30757b4a4e969d8d35b11cd8b2d4a79b5d913e2e97221898b"
	I1025 10:54:19.863185  443384 cri.go:89] found id: "61a0b589a5afa1251610e31cdc2e6f467b5f10034155e9f8f31f35ca1b7206db"
	I1025 10:54:19.863188  443384 cri.go:89] found id: "9fd9a8a5cc009ac96b64da624b784533a1e04c500388713c6f7b31e40b933a8a"
	I1025 10:54:19.863193  443384 cri.go:89] found id: "89d684ba83e381a7a64dfe04e56c3aeb40ac87bb22b331b2ff4e4a21ccbcf692"
	I1025 10:54:19.863197  443384 cri.go:89] found id: "eeeddcccacdfbafcc1ee8c599ecbc7ea9e0d84371cb6d1703687ee61d8bb755f"
	I1025 10:54:19.863200  443384 cri.go:89] found id: "e8ffdbbd81192b725fe1a39da32c0b7c2876d35dcd77f38936cc2fce64d55965"
	I1025 10:54:19.863203  443384 cri.go:89] found id: "06385d2fbad2b260e0ee5f55b8d5cba605e51721e2396456ae2851909149c4a9"
	I1025 10:54:19.863227  443384 cri.go:89] found id: "daa7921a2bd460dd730ca528a51c0a28ebc5238cac5327f709bbab5a00da8e58"
	I1025 10:54:19.863251  443384 cri.go:89] found id: "61207ccd40885efe969607525a97edab12360330c40d8763143e717d0f79d0a0"
	I1025 10:54:19.863263  443384 cri.go:89] found id: "aaf729d4b726bd4ed2dcc9bceb249d54230b6890b628f49f2294833bd5b31249"
	I1025 10:54:19.863267  443384 cri.go:89] found id: ""
	I1025 10:54:19.863330  443384 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:54:19.880530  443384 out.go:203] 
	W1025 10:54:19.883752  443384 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:54:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:54:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:54:19.883773  443384 out.go:285] * 
	* 
	W1025 10:54:19.889264  443384 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:54:19.892264  443384 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-031983 --alsologtostderr -v=1 failed: exit status 80
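
Every retry in the trace above dies on the same error: `sudo runc list -f json` cannot open /run/runc. That path is runc's default state root when no --root is given; if the CRI runtime launches runc under a different state root (or uses another OCI runtime entirely), a bare `runc list` sees no containers even while crictl reports eleven running ones, which is precisely the split visible in the stderr log. Note also that the first attempt already ran `systemctl disable --now kubelet`, so by the second pass kubelet is down while pause still cannot enumerate containers. The failing probe can be reproduced by hand via minikube's ssh wrapper:

	minikube -p old-k8s-version-031983 ssh -- sudo runc list -f json
	minikube -p old-k8s-version-031983 ssh -- sudo crictl ps -a --quiet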
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-031983
helpers_test.go:243: (dbg) docker inspect old-k8s-version-031983:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19",
	        "Created": "2025-10-25T10:51:50.262019678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 441285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:53:14.065553187Z",
	            "FinishedAt": "2025-10-25T10:53:13.180841674Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/hosts",
	        "LogPath": "/var/lib/docker/containers/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19-json.log",
	        "Name": "/old-k8s-version-031983",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-031983:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-031983",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19",
	                "LowerDir": "/var/lib/docker/overlay2/f83171e8b997df441f44209753365f0b1cf2bf8af3f3f60c6899baef6933b87f-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f83171e8b997df441f44209753365f0b1cf2bf8af3f3f60c6899baef6933b87f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f83171e8b997df441f44209753365f0b1cf2bf8af3f3f60c6899baef6933b87f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f83171e8b997df441f44209753365f0b1cf2bf8af3f3f60c6899baef6933b87f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-031983",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-031983/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-031983",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-031983",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-031983",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c6c60a4f97644cb1728be4d0d6b4920511ec60fd91237bfe8de8afd68a822970",
	            "SandboxKey": "/var/run/docker/netns/c6c60a4f9764",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-031983": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:12:2a:f3:30:db",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f134955a6205418e30c262ff57f17637fbd69f7510dbb06a7800f5313ba135a3",
	                    "EndpointID": "318510742ef8a3e32b2967af06c6a8e66f17ce18b4d0218a118adf29bcd6c82e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-031983",
	                        "c9e4fcd1d868"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
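For reference, the port mappings in the inspect output above are how the suite reaches the node from the host: each container port (22, 2376, 5000, 8443, 32443) is published on 127.0.0.1 at an ephemeral high port. A minimal query for the mapped SSH port, reusing the same Go template this log runs later (expected output here: 33413):

	# prints the host port bound to the container's 22/tcp
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  old-k8s-version-031983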
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-031983 -n old-k8s-version-031983
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-031983 -n old-k8s-version-031983: exit status 2 (367.86936ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-031983 logs -n 25
E1025 10:54:21.255370  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-031983 logs -n 25: (1.38212535s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-759329 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo containerd config dump                                                                                                                                                                                                  │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo crio config                                                                                                                                                                                                             │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ delete  │ -p cilium-759329                                                                                                                                                                                                                              │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p force-systemd-env-623432 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-623432  │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ start   │ -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ delete  │ -p kubernetes-upgrade-291330                                                                                                                                                                                                                  │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p cert-expiration-736062 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-736062    │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:51 UTC │
	│ delete  │ -p force-systemd-env-623432                                                                                                                                                                                                                   │ force-systemd-env-623432  │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p cert-options-771620 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-771620       │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:51 UTC │
	│ ssh     │ cert-options-771620 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-771620       │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ ssh     │ -p cert-options-771620 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-771620       │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ delete  │ -p cert-options-771620                                                                                                                                                                                                                        │ cert-options-771620       │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-031983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:52 UTC │                     │
	│ stop    │ -p old-k8s-version-031983 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-031983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:53 UTC │
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:54 UTC │
	│ image   │ old-k8s-version-031983 image list --format=json                                                                                                                                                                                               │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ pause   │ -p old-k8s-version-031983 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:53:13
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:53:13.776842  441160 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:53:13.776970  441160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:53:13.776979  441160 out.go:374] Setting ErrFile to fd 2...
	I1025 10:53:13.776984  441160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:53:13.777249  441160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:53:13.777686  441160 out.go:368] Setting JSON to false
	I1025 10:53:13.778731  441160 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9345,"bootTime":1761380249,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:53:13.778807  441160 start.go:141] virtualization:  
	I1025 10:53:13.781946  441160 out.go:179] * [old-k8s-version-031983] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:53:13.785833  441160 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:53:13.785953  441160 notify.go:220] Checking for updates...
	I1025 10:53:13.791792  441160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:53:13.794602  441160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:53:13.797539  441160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:53:13.800356  441160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:53:13.803133  441160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:53:13.806639  441160 config.go:182] Loaded profile config "old-k8s-version-031983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:53:13.810297  441160 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1025 10:53:13.813114  441160 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:53:13.835254  441160 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:53:13.835366  441160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:53:13.901849  441160 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:53:13.891753748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:53:13.901957  441160 docker.go:318] overlay module found
	I1025 10:53:13.905104  441160 out.go:179] * Using the docker driver based on existing profile
	I1025 10:53:13.908100  441160 start.go:305] selected driver: docker
	I1025 10:53:13.908119  441160 start.go:925] validating driver "docker" against &{Name:old-k8s-version-031983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:53:13.908242  441160 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:53:13.909240  441160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:53:13.973035  441160 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:53:13.957469096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:53:13.973369  441160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:53:13.973404  441160 cni.go:84] Creating CNI manager for ""
	I1025 10:53:13.973468  441160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:53:13.973510  441160 start.go:349] cluster config:
	{Name:old-k8s-version-031983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:53:13.978549  441160 out.go:179] * Starting "old-k8s-version-031983" primary control-plane node in "old-k8s-version-031983" cluster
	I1025 10:53:13.981520  441160 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:53:13.984382  441160 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:53:13.987175  441160 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:53:13.987240  441160 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1025 10:53:13.987253  441160 cache.go:58] Caching tarball of preloaded images
	I1025 10:53:13.987256  441160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:53:13.987353  441160 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:53:13.987363  441160 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 10:53:13.987504  441160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/config.json ...
	I1025 10:53:14.009820  441160 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:53:14.009847  441160 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:53:14.009862  441160 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:53:14.009895  441160 start.go:360] acquireMachinesLock for old-k8s-version-031983: {Name:mkea21c13c631a617ed8bc5861a3bc5db7c7a81f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:53:14.009959  441160 start.go:364] duration metric: took 39.64µs to acquireMachinesLock for "old-k8s-version-031983"
	I1025 10:53:14.010015  441160 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:53:14.010028  441160 fix.go:54] fixHost starting: 
	I1025 10:53:14.010305  441160 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:53:14.028419  441160 fix.go:112] recreateIfNeeded on old-k8s-version-031983: state=Stopped err=<nil>
	W1025 10:53:14.028453  441160 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:53:14.031746  441160 out.go:252] * Restarting existing docker container for "old-k8s-version-031983" ...
	I1025 10:53:14.031866  441160 cli_runner.go:164] Run: docker start old-k8s-version-031983
	I1025 10:53:14.285508  441160 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:53:14.306895  441160 kic.go:430] container "old-k8s-version-031983" state is running.
	I1025 10:53:14.307383  441160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-031983
	I1025 10:53:14.328042  441160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/config.json ...
	I1025 10:53:14.328266  441160 machine.go:93] provisionDockerMachine start ...
	I1025 10:53:14.328335  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:14.360183  441160 main.go:141] libmachine: Using SSH client type: native
	I1025 10:53:14.360515  441160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1025 10:53:14.360921  441160 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:53:14.362107  441160 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 10:53:17.513564  441160 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-031983
	
	I1025 10:53:17.513596  441160 ubuntu.go:182] provisioning hostname "old-k8s-version-031983"
	I1025 10:53:17.513663  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:17.539530  441160 main.go:141] libmachine: Using SSH client type: native
	I1025 10:53:17.539855  441160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1025 10:53:17.539874  441160 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-031983 && echo "old-k8s-version-031983" | sudo tee /etc/hostname
	I1025 10:53:17.701661  441160 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-031983
	
	I1025 10:53:17.701752  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:17.719242  441160 main.go:141] libmachine: Using SSH client type: native
	I1025 10:53:17.719637  441160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1025 10:53:17.719655  441160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-031983' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-031983/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-031983' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:53:17.874581  441160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:53:17.874609  441160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:53:17.874639  441160 ubuntu.go:190] setting up certificates
	I1025 10:53:17.874649  441160 provision.go:84] configureAuth start
	I1025 10:53:17.874721  441160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-031983
	I1025 10:53:17.892842  441160 provision.go:143] copyHostCerts
	I1025 10:53:17.893116  441160 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:53:17.893141  441160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:53:17.893226  441160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:53:17.893347  441160 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:53:17.893352  441160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:53:17.893378  441160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:53:17.893438  441160 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:53:17.893443  441160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:53:17.893466  441160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:53:17.893566  441160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-031983 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-031983]
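	The server cert minted here carries the SANs listed in the line above (127.0.0.1, 192.168.85.2, localhost, minikube, old-k8s-version-031983). They can be confirmed with the same openssl x509 invocation style the cert-options test in the Audit table uses, pointed at the server.pem that gets copied to the node a few lines below:
	
	  # list the certificate's Subject Alternative Names
	  openssl x509 -text -noout \
	    -in /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'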
	I1025 10:53:18.342720  441160 provision.go:177] copyRemoteCerts
	I1025 10:53:18.342788  441160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:53:18.342831  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:18.359622  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:18.465906  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:53:18.484763  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 10:53:18.502444  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:53:18.520965  441160 provision.go:87] duration metric: took 646.291938ms to configureAuth
	I1025 10:53:18.521034  441160 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:53:18.521313  441160 config.go:182] Loaded profile config "old-k8s-version-031983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:53:18.521463  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:18.540474  441160 main.go:141] libmachine: Using SSH client type: native
	I1025 10:53:18.540778  441160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1025 10:53:18.540796  441160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:53:18.855887  441160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:53:18.855963  441160 machine.go:96] duration metric: took 4.527687332s to provisionDockerMachine
	I1025 10:53:18.855988  441160 start.go:293] postStartSetup for "old-k8s-version-031983" (driver="docker")
	I1025 10:53:18.856035  441160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:53:18.856135  441160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:53:18.856212  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:18.877597  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:18.982150  441160 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:53:18.985524  441160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:53:18.985555  441160 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:53:18.985567  441160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:53:18.985626  441160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:53:18.985714  441160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:53:18.985823  441160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:53:18.993374  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:53:19.018526  441160 start.go:296] duration metric: took 162.502251ms for postStartSetup
	I1025 10:53:19.018613  441160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:53:19.018677  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:19.036887  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:19.140060  441160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:53:19.144987  441160 fix.go:56] duration metric: took 5.134949577s for fixHost
	I1025 10:53:19.145016  441160 start.go:83] releasing machines lock for "old-k8s-version-031983", held for 5.135045012s
	I1025 10:53:19.145097  441160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-031983
	I1025 10:53:19.162686  441160 ssh_runner.go:195] Run: cat /version.json
	I1025 10:53:19.162756  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:19.163042  441160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:53:19.163104  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:19.181116  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:19.183991  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:19.285962  441160 ssh_runner.go:195] Run: systemctl --version
	I1025 10:53:19.377462  441160 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:53:19.415533  441160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:53:19.420136  441160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:53:19.420261  441160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:53:19.428441  441160 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:53:19.428466  441160 start.go:495] detecting cgroup driver to use...
	I1025 10:53:19.428500  441160 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:53:19.428566  441160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:53:19.444790  441160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:53:19.458431  441160 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:53:19.458535  441160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:53:19.475152  441160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:53:19.488965  441160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:53:19.629519  441160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:53:19.754915  441160 docker.go:234] disabling docker service ...
	I1025 10:53:19.755048  441160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:53:19.770231  441160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:53:19.783993  441160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:53:19.907961  441160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:53:20.033332  441160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:53:20.048145  441160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:53:20.064786  441160 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 10:53:20.064870  441160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.074982  441160 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:53:20.075063  441160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.084852  441160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.094420  441160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.105438  441160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:53:20.114080  441160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.123547  441160 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.132752  441160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.141787  441160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:53:20.149651  441160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:53:20.157831  441160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:53:20.271455  441160 ssh_runner.go:195] Run: sudo systemctl restart crio
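	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings before the restart (a sketch reconstructed from the commands; any enclosing [crio.*] section headers are left out, since the edits match the keys wherever they occur):
	
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]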
	I1025 10:53:20.403967  441160 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:53:20.404047  441160 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:53:20.407916  441160 start.go:563] Will wait 60s for crictl version
	I1025 10:53:20.407979  441160 ssh_runner.go:195] Run: which crictl
	I1025 10:53:20.414454  441160 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:53:20.440643  441160 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:53:20.440803  441160 ssh_runner.go:195] Run: crio --version
	I1025 10:53:20.470864  441160 ssh_runner.go:195] Run: crio --version
	I1025 10:53:20.504716  441160 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1025 10:53:20.507717  441160 cli_runner.go:164] Run: docker network inspect old-k8s-version-031983 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:53:20.524165  441160 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:53:20.529642  441160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:53:20.539902  441160 kubeadm.go:883] updating cluster {Name:old-k8s-version-031983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:53:20.540023  441160 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:53:20.540075  441160 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:53:20.577921  441160 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:53:20.577951  441160 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:53:20.578039  441160 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:53:20.607425  441160 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:53:20.607447  441160 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:53:20.607454  441160 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1025 10:53:20.607589  441160 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-031983 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:53:20.607677  441160 ssh_runner.go:195] Run: crio config
	I1025 10:53:20.667448  441160 cni.go:84] Creating CNI manager for ""
	I1025 10:53:20.667473  441160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:53:20.667496  441160 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:53:20.667546  441160 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-031983 NodeName:old-k8s-version-031983 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:53:20.667712  441160 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-031983"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
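	The config above is staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below. Outside the test run, a config like this can be sanity-checked without applying it, assuming a v1.28-series kubeadm binary is on PATH:
	
	  # run kubeadm's init phases without persisting changes, to validate the config
	  kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new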
	
	I1025 10:53:20.667791  441160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1025 10:53:20.675659  441160 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:53:20.675824  441160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:53:20.683822  441160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1025 10:53:20.696966  441160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:53:20.711474  441160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
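The 2160-byte kubeadm.yaml.new staged here is the multi-document config printed above, rendered from the kubeadm options struct. A minimal sketch of that render step, assuming a hand-rolled template rather than minikube's actual bsutil templates:

```go
// Sketch (not minikube's real template) of rendering a kubeadm
// InitConfiguration from a struct with text/template, the same mechanism
// minikube uses to produce the YAML shown in the log.
package main

import (
	"os"
	"text/template"
)

type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(tmpl))
	// Values taken from the log above.
	cfg := initCfg{"192.168.85.2", 8443, "old-k8s-version-031983", "unix:///var/run/crio/crio.sock"}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```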
	I1025 10:53:20.725500  441160 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:53:20.729258  441160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
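The /etc/hosts rewrite above is an idempotent upsert: grep -v strips any stale line ending in a tab plus the hostname, the fresh "ip<TAB>host" pair is appended, and the result is copied back over /etc/hosts. The same logic in Go, as a hypothetical helper (ensureHostsEntry is not a minikube function):

```go
// ensureHostsEntry mirrors the bash one-liner in the log: drop any existing
// line ending in "\t<host>" (like grep -v) and append "ip\thost".
// Unlike the one-liner, it also drops empty lines; illustration only.
package main

import (
	"fmt"
	"strings"
)

func ensureHostsEntry(hosts, ip, host string) string {
	var out []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the stale mapping
		}
		if line != "" {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+host) // append the fresh mapping
	return strings.Join(out, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.85.3\tcontrol-plane.minikube.internal\n"
	fmt.Print(ensureHostsEntry(in, "192.168.85.2", "control-plane.minikube.internal"))
}
```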
	I1025 10:53:20.739184  441160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:53:20.860755  441160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:53:20.877971  441160 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983 for IP: 192.168.85.2
	I1025 10:53:20.878002  441160 certs.go:195] generating shared ca certs ...
	I1025 10:53:20.878025  441160 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:53:20.878226  441160 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:53:20.878301  441160 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:53:20.878318  441160 certs.go:257] generating profile certs ...
	I1025 10:53:20.878422  441160 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.key
	I1025 10:53:20.878516  441160 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.key.11393817
	I1025 10:53:20.878589  441160 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.key
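All the profile certs are reused here because they are still valid. When the cached CA is missing, "generating shared ca certs" amounts to building a self-signed x509 CA. A standard-library sketch of that step (illustrative, not minikube's certs.go):

```go
// Build a self-signed CA certificate and print it as PEM.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Template doubles as parent: the CA signs itself.
	der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```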
	I1025 10:53:20.878719  441160 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:53:20.878779  441160 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:53:20.878795  441160 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:53:20.878836  441160 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:53:20.878883  441160 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:53:20.878916  441160 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:53:20.878979  441160 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:53:20.879566  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:53:20.904000  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:53:20.924305  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:53:20.947424  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:53:20.973000  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 10:53:21.004236  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:53:21.030183  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:53:21.055404  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:53:21.080432  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:53:21.101890  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:53:21.123491  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:53:21.143051  441160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:53:21.158270  441160 ssh_runner.go:195] Run: openssl version
	I1025 10:53:21.164885  441160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:53:21.177311  441160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:53:21.181467  441160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:53:21.181535  441160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:53:21.223424  441160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:53:21.231235  441160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:53:21.239519  441160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:53:21.243170  441160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:53:21.243236  441160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:53:21.287344  441160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:53:21.295421  441160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:53:21.303898  441160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:53:21.308007  441160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:53:21.308106  441160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:53:21.354044  441160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
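The b5213941.0, 51391683.0 and 3ec20f2e.0 symlinks follow OpenSSL's hashed-directory convention: trust stores are looked up by <subject-hash>.<n>, where the hash is what "openssl x509 -hash" prints. A sketch of the hash-and-link step, assuming openssl is on PATH and write access to the certs dir:

```go
// Compute the subject hash via openssl and create the <hash>.0 symlink,
// reproducing the "ln -fs" steps in the log. Illustration only.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func hashLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```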
	I1025 10:53:21.362267  441160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:53:21.366180  441160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:53:21.407371  441160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:53:21.453469  441160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:53:21.501020  441160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:53:21.578577  441160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:53:21.639140  441160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
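Each "-checkend 86400" call asks whether the certificate will still be valid 24 hours from now. The same check can be done natively with crypto/x509; a sketch, using one of the cert paths from the log above:

```go
// Native equivalent of "openssl x509 -checkend 86400": parse the PEM cert
// and verify it remains valid for at least the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Valid iff expiry lies beyond now+d.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
```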
	I1025 10:53:21.736500  441160 kubeadm.go:400] StartCluster: {Name:old-k8s-version-031983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:53:21.736618  441160 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:53:21.736741  441160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:53:21.796515  441160 cri.go:89] found id: "eeeddcccacdfbafcc1ee8c599ecbc7ea9e0d84371cb6d1703687ee61d8bb755f"
	I1025 10:53:21.796538  441160 cri.go:89] found id: "e8ffdbbd81192b725fe1a39da32c0b7c2876d35dcd77f38936cc2fce64d55965"
	I1025 10:53:21.796545  441160 cri.go:89] found id: "06385d2fbad2b260e0ee5f55b8d5cba605e51721e2396456ae2851909149c4a9"
	I1025 10:53:21.796559  441160 cri.go:89] found id: "daa7921a2bd460dd730ca528a51c0a28ebc5238cac5327f709bbab5a00da8e58"
	I1025 10:53:21.796589  441160 cri.go:89] found id: ""
	I1025 10:53:21.796646  441160 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:53:21.811344  441160 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:53:21Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:53:21.811475  441160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:53:21.826114  441160 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:53:21.826135  441160 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:53:21.826231  441160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:53:21.840225  441160 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:53:21.840920  441160 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-031983" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:53:21.841290  441160 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-259409/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-031983" cluster setting kubeconfig missing "old-k8s-version-031983" context setting]
	I1025 10:53:21.841859  441160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
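"needs updating (will repair)" means the missing cluster and context entries are inserted into the kubeconfig and the file is rewritten under the lock acquired here. Sketched with client-go's clientcmd; the CA and client credential paths below are hypothetical placeholders, and this is not minikube's kubeconfig.go:

```go
// Add a cluster/context/user triple to an existing kubeconfig and write it
// back, approximating the repair step in the log.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

func repair(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cfg.Clusters[name] = &api.Cluster{
		Server:               server,
		CertificateAuthority: "/path/to/ca.crt", // placeholder
	}
	cfg.AuthInfos[name] = &api.AuthInfo{
		ClientCertificate: "/path/to/client.crt", // placeholder
		ClientKey:         "/path/to/client.key", // placeholder
	}
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = repair("/home/jenkins/.kube/config", "old-k8s-version-031983", "https://192.168.85.2:8443")
}
```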
	I1025 10:53:21.843928  441160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:53:21.858029  441160 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:53:21.858121  441160 kubeadm.go:601] duration metric: took 31.923265ms to restartPrimaryControlPlane
	I1025 10:53:21.858137  441160 kubeadm.go:402] duration metric: took 121.646875ms to StartCluster
	I1025 10:53:21.858168  441160 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:53:21.858260  441160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:53:21.859358  441160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:53:21.859643  441160 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:53:21.860102  441160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:53:21.860183  441160 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-031983"
	I1025 10:53:21.860204  441160 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-031983"
	I1025 10:53:21.860204  441160 config.go:182] Loaded profile config "old-k8s-version-031983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	W1025 10:53:21.860211  441160 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:53:21.860235  441160 host.go:66] Checking if "old-k8s-version-031983" exists ...
	I1025 10:53:21.860280  441160 addons.go:69] Setting dashboard=true in profile "old-k8s-version-031983"
	I1025 10:53:21.860301  441160 addons.go:238] Setting addon dashboard=true in "old-k8s-version-031983"
	W1025 10:53:21.860307  441160 addons.go:247] addon dashboard should already be in state true
	I1025 10:53:21.860326  441160 host.go:66] Checking if "old-k8s-version-031983" exists ...
	I1025 10:53:21.860684  441160 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:53:21.860973  441160 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:53:21.865942  441160 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-031983"
	I1025 10:53:21.866263  441160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-031983"
	I1025 10:53:21.866169  441160 out.go:179] * Verifying Kubernetes components...
	I1025 10:53:21.867362  441160 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:53:21.871459  441160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:53:21.923079  441160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:53:21.923228  441160 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:53:21.926173  441160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:53:21.926198  441160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:53:21.926274  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:21.930969  441160 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:53:21.933959  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:53:21.934238  441160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:53:21.934325  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:21.945392  441160 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-031983"
	W1025 10:53:21.945415  441160 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:53:21.945439  441160 host.go:66] Checking if "old-k8s-version-031983" exists ...
	I1025 10:53:21.945875  441160 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:53:22.003066  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:22.018521  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:22.027629  441160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:53:22.027650  441160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:53:22.027722  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:22.056502  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:22.260179  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:53:22.260256  441160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:53:22.272828  441160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:53:22.313691  441160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:53:22.318622  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:53:22.318646  441160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:53:22.320449  441160 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-031983" to be "Ready" ...
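The node wait polls the API until the Ready condition turns True, within the 6m0s budget noted here. A client-go sketch of that loop, assuming a kubeconfig at the default location:

```go
// Poll the node's Ready condition until true or the timeout elapses.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "old-k8s-version-031983", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("ready:", err == nil)
}
```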
	I1025 10:53:22.361045  441160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:53:22.379108  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:53:22.379133  441160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:53:22.444162  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:53:22.444186  441160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:53:22.514329  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:53:22.514355  441160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:53:22.580443  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:53:22.580470  441160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:53:22.640026  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:53:22.640051  441160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:53:22.689928  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:53:22.689954  441160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:53:22.714220  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:53:22.714244  441160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:53:22.733190  441160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:53:26.820977  441160 node_ready.go:49] node "old-k8s-version-031983" is "Ready"
	I1025 10:53:26.821009  441160 node_ready.go:38] duration metric: took 4.500524839s for node "old-k8s-version-031983" to be "Ready" ...
	I1025 10:53:26.821031  441160 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:53:26.821090  441160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:53:28.601887  441160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.288147885s)
	I1025 10:53:28.602065  441160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.240995491s)
	I1025 10:53:29.137655  441160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.40441892s)
	I1025 10:53:29.137849  441160 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.316737651s)
	I1025 10:53:29.137879  441160 api_server.go:72] duration metric: took 7.27820138s to wait for apiserver process to appear ...
	I1025 10:53:29.137886  441160 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:53:29.137902  441160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:53:29.140866  441160 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-031983 addons enable metrics-server
	
	I1025 10:53:29.143856  441160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1025 10:53:29.146773  441160 addons.go:514] duration metric: took 7.286666082s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:53:29.148550  441160 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
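The healthz probe is a plain HTTPS GET whose body should read "ok". A minimal sketch; it skips server verification for brevity, whereas a real client should pin the cluster CA instead:

```go
// GET the apiserver healthz endpoint and print status plus body.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify only to keep the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```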
	I1025 10:53:29.150121  441160 api_server.go:141] control plane version: v1.28.0
	I1025 10:53:29.150154  441160 api_server.go:131] duration metric: took 12.259485ms to wait for apiserver health ...
	I1025 10:53:29.150164  441160 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:53:29.155275  441160 system_pods.go:59] 8 kube-system pods found
	I1025 10:53:29.155328  441160 system_pods.go:61] "coredns-5dd5756b68-jd2rz" [24ce3549-a06c-405e-943d-2982e2ee63de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:53:29.155339  441160 system_pods.go:61] "etcd-old-k8s-version-031983" [7afeb15f-fef7-4c88-ba96-7cd4bd24b4a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:53:29.155350  441160 system_pods.go:61] "kindnet-2sbx5" [b129bb16-e936-4865-b06a-a71756a88fa9] Running
	I1025 10:53:29.155358  441160 system_pods.go:61] "kube-apiserver-old-k8s-version-031983" [9a0fa9ff-e383-482b-9217-5089637f3579] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:53:29.155372  441160 system_pods.go:61] "kube-controller-manager-old-k8s-version-031983" [1e037440-e4e2-4392-8c8d-ac2bcceb2723] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:53:29.155378  441160 system_pods.go:61] "kube-proxy-q597g" [21cc5901-1ab1-495b-9b85-3812b03b4ddc] Running
	I1025 10:53:29.155387  441160 system_pods.go:61] "kube-scheduler-old-k8s-version-031983" [e163349f-3264-496f-b34f-7ad2a108c7fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:53:29.155405  441160 system_pods.go:61] "storage-provisioner" [7a27f19d-8bc4-4730-bb35-fd6d4311ef52] Running
	I1025 10:53:29.155418  441160 system_pods.go:74] duration metric: took 5.248057ms to wait for pod list to return data ...
	I1025 10:53:29.155426  441160 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:53:29.158044  441160 default_sa.go:45] found service account: "default"
	I1025 10:53:29.158082  441160 default_sa.go:55] duration metric: took 2.650174ms for default service account to be created ...
	I1025 10:53:29.158092  441160 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:53:29.161800  441160 system_pods.go:86] 8 kube-system pods found
	I1025 10:53:29.161846  441160 system_pods.go:89] "coredns-5dd5756b68-jd2rz" [24ce3549-a06c-405e-943d-2982e2ee63de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:53:29.161858  441160 system_pods.go:89] "etcd-old-k8s-version-031983" [7afeb15f-fef7-4c88-ba96-7cd4bd24b4a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:53:29.161873  441160 system_pods.go:89] "kindnet-2sbx5" [b129bb16-e936-4865-b06a-a71756a88fa9] Running
	I1025 10:53:29.161880  441160 system_pods.go:89] "kube-apiserver-old-k8s-version-031983" [9a0fa9ff-e383-482b-9217-5089637f3579] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:53:29.161896  441160 system_pods.go:89] "kube-controller-manager-old-k8s-version-031983" [1e037440-e4e2-4392-8c8d-ac2bcceb2723] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:53:29.161913  441160 system_pods.go:89] "kube-proxy-q597g" [21cc5901-1ab1-495b-9b85-3812b03b4ddc] Running
	I1025 10:53:29.161925  441160 system_pods.go:89] "kube-scheduler-old-k8s-version-031983" [e163349f-3264-496f-b34f-7ad2a108c7fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:53:29.161929  441160 system_pods.go:89] "storage-provisioner" [7a27f19d-8bc4-4730-bb35-fd6d4311ef52] Running
	I1025 10:53:29.161936  441160 system_pods.go:126] duration metric: took 3.839026ms to wait for k8s-apps to be running ...
	I1025 10:53:29.161950  441160 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:53:29.162061  441160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:53:29.189567  441160 system_svc.go:56] duration metric: took 27.60832ms WaitForService to wait for kubelet
	I1025 10:53:29.189610  441160 kubeadm.go:586] duration metric: took 7.32993012s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:53:29.189630  441160 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:53:29.192906  441160 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:53:29.192955  441160 node_conditions.go:123] node cpu capacity is 2
	I1025 10:53:29.192968  441160 node_conditions.go:105] duration metric: took 3.33274ms to run NodePressure ...
	I1025 10:53:29.192982  441160 start.go:241] waiting for startup goroutines ...
	I1025 10:53:29.192989  441160 start.go:246] waiting for cluster config update ...
	I1025 10:53:29.193000  441160 start.go:255] writing updated cluster config ...
	I1025 10:53:29.193324  441160 ssh_runner.go:195] Run: rm -f paused
	I1025 10:53:29.196992  441160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:53:29.201275  441160 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-jd2rz" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:53:31.207741  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:33.207867  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:35.707524  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:37.708750  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:40.210587  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:42.708348  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:45.214214  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:47.712464  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:50.207287  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:52.207728  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:54.707389  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:56.707660  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:59.207035  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:54:01.207818  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:54:03.706687  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	I1025 10:54:04.708791  441160 pod_ready.go:94] pod "coredns-5dd5756b68-jd2rz" is "Ready"
	I1025 10:54:04.708821  441160 pod_ready.go:86] duration metric: took 35.507517209s for pod "coredns-5dd5756b68-jd2rz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:04.711896  441160 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:04.716999  441160 pod_ready.go:94] pod "etcd-old-k8s-version-031983" is "Ready"
	I1025 10:54:04.717025  441160 pod_ready.go:86] duration metric: took 5.103374ms for pod "etcd-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:04.720524  441160 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:04.725413  441160 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-031983" is "Ready"
	I1025 10:54:04.725438  441160 pod_ready.go:86] duration metric: took 4.889776ms for pod "kube-apiserver-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:04.728841  441160 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:04.905635  441160 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-031983" is "Ready"
	I1025 10:54:04.905719  441160 pod_ready.go:86] duration metric: took 176.851299ms for pod "kube-controller-manager-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:05.110811  441160 pod_ready.go:83] waiting for pod "kube-proxy-q597g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:05.505877  441160 pod_ready.go:94] pod "kube-proxy-q597g" is "Ready"
	I1025 10:54:05.505909  441160 pod_ready.go:86] duration metric: took 395.01743ms for pod "kube-proxy-q597g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:05.706063  441160 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:06.106840  441160 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-031983" is "Ready"
	I1025 10:54:06.106873  441160 pod_ready.go:86] duration metric: took 400.779419ms for pod "kube-scheduler-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:06.106886  441160 pod_ready.go:40] duration metric: took 36.909861799s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
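Each pod wait above succeeds once the pod either reports the Ready condition or has been deleted. One iteration of that check, sketched with client-go (readyOrGone is a hypothetical helper, not minikube's pod_ready.go):

```go
// Fetch a pod; NotFound counts as "gone", otherwise inspect PodReady.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func readyOrGone(cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // pod is gone, which also satisfies the wait
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ok, err := readyOrGone(kubernetes.NewForConfigOrDie(cfg), "kube-system", "coredns-5dd5756b68-jd2rz")
	fmt.Println(ok, err)
}
```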
	I1025 10:54:06.166539  441160 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1025 10:54:06.169868  441160 out.go:203] 
	W1025 10:54:06.173017  441160 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:54:06.176032  441160 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:54:06.178986  441160 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-031983" cluster and "default" namespace by default
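"minor skew: 5" is the minor-version distance between the local kubectl (1.33) and the cluster (1.28); upstream kubectl only guarantees compatibility within one minor version, hence the warning. A trivial sketch of the computation (assumes well-formed "major.minor.patch" strings):

```go
// Compute the absolute minor-version distance between two semver strings.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minorSkew(a, b string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(a) - minor(b)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.33.2", "1.28.0")) // prints 5
}
```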
	
	
	==> CRI-O <==
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.073355897Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.080955434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.08152947Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.099463985Z" level=info msg="Created container 61207ccd40885efe969607525a97edab12360330c40d8763143e717d0f79d0a0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh/dashboard-metrics-scraper" id=0af1547e-d731-4642-9c85-a72aabb3dc0b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.100622396Z" level=info msg="Starting container: 61207ccd40885efe969607525a97edab12360330c40d8763143e717d0f79d0a0" id=f1c528fa-8c14-47df-bc4c-424d6c7544ad name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.103301493Z" level=info msg="Started container" PID=1646 containerID=61207ccd40885efe969607525a97edab12360330c40d8763143e717d0f79d0a0 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh/dashboard-metrics-scraper id=f1c528fa-8c14-47df-bc4c-424d6c7544ad name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc0276c98573e2b947a5b228fad1c038cc4923b08bc9c332a68c1d6ce9eea2ee
	Oct 25 10:54:05 old-k8s-version-031983 conmon[1644]: conmon 61207ccd40885efe9696 <ninfo>: container 1646 exited with status 1
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.309866829Z" level=info msg="Removing container: c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e" id=e4f48877-5ac2-4110-8baa-94ad75430e60 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.318275201Z" level=info msg="Error loading conmon cgroup of container c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e: cgroup deleted" id=e4f48877-5ac2-4110-8baa-94ad75430e60 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.323000283Z" level=info msg="Removed container c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh/dashboard-metrics-scraper" id=e4f48877-5ac2-4110-8baa-94ad75430e60 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.942810758Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.947661313Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.947703274Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.94772588Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.951161825Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.951198297Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.951223077Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.955693641Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.955732222Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.95576181Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.959428847Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.959464211Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.959488572Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.962868427Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.962905465Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	61207ccd40885       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   bc0276c98573e       dashboard-metrics-scraper-5f989dc9cf-wc9hh       kubernetes-dashboard
	1b6f7fca84250       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   4b19f19b0f6c2       storage-provisioner                              kube-system
	aaf729d4b726b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   45d30537cf423       kubernetes-dashboard-8694d4445c-d2zpz            kubernetes-dashboard
	282bf80083c3f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           53 seconds ago      Running             coredns                     1                   b2ef1972039f0       coredns-5dd5756b68-jd2rz                         kube-system
	8018aae064ea4       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   60cb06078bfbf       busybox                                          default
	61a0b589a5afa       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago      Running             kindnet-cni                 1                   7bf495c1e29a8       kindnet-2sbx5                                    kube-system
	9fd9a8a5cc009       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           53 seconds ago      Running             kube-proxy                  1                   4a3a44f303132       kube-proxy-q597g                                 kube-system
	89d684ba83e38       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   4b19f19b0f6c2       storage-provisioner                              kube-system
	eeeddcccacdfb       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           59 seconds ago      Running             kube-apiserver              1                   650bb121db0bf       kube-apiserver-old-k8s-version-031983            kube-system
	e8ffdbbd81192       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           59 seconds ago      Running             etcd                        1                   371bcd9ffbd93       etcd-old-k8s-version-031983                      kube-system
	06385d2fbad2b       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           59 seconds ago      Running             kube-scheduler              1                   3f873df199b4e       kube-scheduler-old-k8s-version-031983            kube-system
	daa7921a2bd46       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           59 seconds ago      Running             kube-controller-manager     1                   7e360217702e4       kube-controller-manager-old-k8s-version-031983   kube-system
	
	
	==> coredns [282bf80083c3f2f30757b4a4e969d8d35b11cd8b2d4a79b5d913e2e97221898b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34008 - 20101 "HINFO IN 8084456919790498261.6056145886745122556. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015860437s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-031983
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-031983
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=old-k8s-version-031983
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_52_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:52:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-031983
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:54:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:53:57 +0000   Sat, 25 Oct 2025 10:52:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:53:57 +0000   Sat, 25 Oct 2025 10:52:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:53:57 +0000   Sat, 25 Oct 2025 10:52:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:53:57 +0000   Sat, 25 Oct 2025 10:52:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-031983
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d37866a6-3d06-4c4f-bdc5-afc6ab378351
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-jd2rz                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-old-k8s-version-031983                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-2sbx5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-031983             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-031983    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-q597g                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-031983             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-wc9hh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-d2zpz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 108s                   kube-proxy       
	  Normal  Starting                 53s                    kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-031983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m2s                   kubelet          Node old-k8s-version-031983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s                   kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m2s                   kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s                   node-controller  Node old-k8s-version-031983 event: Registered Node old-k8s-version-031983 in Controller
	  Normal  NodeReady                95s                    kubelet          Node old-k8s-version-031983 status is now: NodeReady
	  Normal  Starting                 61s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node old-k8s-version-031983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)      kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                    node-controller  Node old-k8s-version-031983 event: Registered Node old-k8s-version-031983 in Controller
	
	
	==> dmesg <==
	[Oct25 10:25] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:31] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[ +23.146409] overlayfs: idmapped layers are currently not supported
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e8ffdbbd81192b725fe1a39da32c0b7c2876d35dcd77f38936cc2fce64d55965] <==
	{"level":"info","ts":"2025-10-25T10:53:22.087578Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T10:53:22.073022Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-10-25T10:53:22.073174Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:53:22.087655Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:53:22.087672Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:53:22.087216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-25T10:53:22.088314Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-25T10:53:22.088635Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:53:22.08867Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:53:22.072916Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:53:22.088897Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:53:23.174039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T10:53:23.174152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T10:53:23.174209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T10:53:23.174253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T10:53:23.174289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T10:53:23.174333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-25T10:53:23.174363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T10:53:23.18075Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-031983 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T10:53:23.180975Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:53:23.181617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:53:23.183777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T10:53:23.182004Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-25T10:53:23.190609Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T10:53:23.19069Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:54:21 up  2:36,  0 user,  load average: 2.09, 3.27, 2.77
	Linux old-k8s-version-031983 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [61a0b589a5afa1251610e31cdc2e6f467b5f10034155e9f8f31f35ca1b7206db] <==
	I1025 10:53:27.749399       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:53:27.749674       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:53:27.749799       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:53:27.749809       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:53:27.749822       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:53:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:53:27.946586       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:53:27.946617       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:53:27.946626       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:53:27.946950       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:53:57.941340       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:53:57.946928       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:53:57.946928       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:53:57.948241       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 10:53:59.346786       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:53:59.346817       1 metrics.go:72] Registering metrics
	I1025 10:53:59.346888       1 controller.go:711] "Syncing nftables rules"
	I1025 10:54:07.942491       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:54:07.942558       1 main.go:301] handling current node
	I1025 10:54:17.948026       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:54:17.948060       1 main.go:301] handling current node
	
	
	==> kube-apiserver [eeeddcccacdfbafcc1ee8c599ecbc7ea9e0d84371cb6d1703687ee61d8bb755f] <==
	I1025 10:53:26.846590       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 10:53:26.851681       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 10:53:26.856309       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:53:26.880766       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 10:53:26.882209       1 aggregator.go:166] initial CRD sync complete...
	I1025 10:53:26.882269       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 10:53:26.882277       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:53:26.882285       1 cache.go:39] Caches are synced for autoregister controller
	E1025 10:53:26.913818       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:53:26.934769       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 10:53:26.934860       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1025 10:53:26.935026       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 10:53:26.947219       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 10:53:27.484532       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:53:28.897430       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 10:53:28.965033       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 10:53:28.995825       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:53:29.009444       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:53:29.021367       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 10:53:29.107928       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.131.130"}
	I1025 10:53:29.129704       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.52.231"}
	I1025 10:53:39.359201       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:53:39.460809       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 10:53:39.460808       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 10:53:39.559508       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [daa7921a2bd460dd730ca528a51c0a28ebc5238cac5327f709bbab5a00da8e58] <==
	I1025 10:53:39.417786       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="362.353959ms"
	I1025 10:53:39.417880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.95µs"
	I1025 10:53:39.565025       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1025 10:53:39.571017       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1025 10:53:39.591798       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-wc9hh"
	I1025 10:53:39.597840       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:53:39.598855       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-d2zpz"
	I1025 10:53:39.608746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="43.765318ms"
	I1025 10:53:39.616779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.247098ms"
	I1025 10:53:39.636608       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.771693ms"
	I1025 10:53:39.636875       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.955µs"
	I1025 10:53:39.638920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="29.787992ms"
	I1025 10:53:39.639023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.693µs"
	I1025 10:53:39.648891       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:53:39.648973       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 10:53:39.658549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="62.77µs"
	I1025 10:53:44.237138       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.67µs"
	I1025 10:53:45.258924       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="116.005µs"
	I1025 10:53:46.255083       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.876µs"
	I1025 10:53:50.281419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.512752ms"
	I1025 10:53:50.281626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="84.776µs"
	I1025 10:54:04.234660       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.822945ms"
	I1025 10:54:04.235056       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.912µs"
	I1025 10:54:05.333027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.398µs"
	I1025 10:54:09.933130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.284µs"
	
	
	==> kube-proxy [9fd9a8a5cc009ac96b64da624b784533a1e04c500388713c6f7b31e40b933a8a] <==
	I1025 10:53:27.842623       1 server_others.go:69] "Using iptables proxy"
	I1025 10:53:27.870350       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1025 10:53:27.896956       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:53:27.898733       1 server_others.go:152] "Using iptables Proxier"
	I1025 10:53:27.898826       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 10:53:27.898861       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 10:53:27.898934       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 10:53:27.899178       1 server.go:846] "Version info" version="v1.28.0"
	I1025 10:53:27.899386       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:53:27.900075       1 config.go:188] "Starting service config controller"
	I1025 10:53:27.900166       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 10:53:27.900214       1 config.go:97] "Starting endpoint slice config controller"
	I1025 10:53:27.900243       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 10:53:27.902774       1 config.go:315] "Starting node config controller"
	I1025 10:53:27.902841       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 10:53:28.000373       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 10:53:28.000427       1 shared_informer.go:318] Caches are synced for service config
	I1025 10:53:28.006252       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [06385d2fbad2b260e0ee5f55b8d5cba605e51721e2396456ae2851909149c4a9] <==
	I1025 10:53:26.057769       1 serving.go:348] Generated self-signed cert in-memory
	I1025 10:53:27.239614       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1025 10:53:27.239719       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:53:27.257832       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 10:53:27.258557       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 10:53:27.258602       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1025 10:53:27.270592       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1025 10:53:27.258619       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:53:27.258628       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:53:27.273444       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1025 10:53:27.271031       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 10:53:27.373518       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1025 10:53:27.378362       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 10:53:27.379508       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 25 10:53:39 old-k8s-version-031983 kubelet[776]: I1025 10:53:39.713719     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8sgz\" (UniqueName: \"kubernetes.io/projected/51ca2d1c-c8a7-4086-835f-2942a08f2e9d-kube-api-access-g8sgz\") pod \"dashboard-metrics-scraper-5f989dc9cf-wc9hh\" (UID: \"51ca2d1c-c8a7-4086-835f-2942a08f2e9d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh"
	Oct 25 10:53:39 old-k8s-version-031983 kubelet[776]: I1025 10:53:39.713789     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/51ca2d1c-c8a7-4086-835f-2942a08f2e9d-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-wc9hh\" (UID: \"51ca2d1c-c8a7-4086-835f-2942a08f2e9d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh"
	Oct 25 10:53:39 old-k8s-version-031983 kubelet[776]: I1025 10:53:39.713824     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdt22\" (UniqueName: \"kubernetes.io/projected/540dc871-78a9-4dd4-adb6-ae9d0481d23c-kube-api-access-mdt22\") pod \"kubernetes-dashboard-8694d4445c-d2zpz\" (UID: \"540dc871-78a9-4dd4-adb6-ae9d0481d23c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-d2zpz"
	Oct 25 10:53:39 old-k8s-version-031983 kubelet[776]: I1025 10:53:39.713852     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/540dc871-78a9-4dd4-adb6-ae9d0481d23c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-d2zpz\" (UID: \"540dc871-78a9-4dd4-adb6-ae9d0481d23c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-d2zpz"
	Oct 25 10:53:39 old-k8s-version-031983 kubelet[776]: W1025 10:53:39.961587     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/crio-45d30537cf423edde419cbb40b8105e00d659c39570f87b808a0c52d3a3c7734 WatchSource:0}: Error finding container 45d30537cf423edde419cbb40b8105e00d659c39570f87b808a0c52d3a3c7734: Status 404 returned error can't find the container with id 45d30537cf423edde419cbb40b8105e00d659c39570f87b808a0c52d3a3c7734
	Oct 25 10:53:44 old-k8s-version-031983 kubelet[776]: I1025 10:53:44.221112     776 scope.go:117] "RemoveContainer" containerID="1d824a18f470bfc83287940358e699c624ab50e8bf4b25896168a1637e3913e3"
	Oct 25 10:53:45 old-k8s-version-031983 kubelet[776]: I1025 10:53:45.231548     776 scope.go:117] "RemoveContainer" containerID="1d824a18f470bfc83287940358e699c624ab50e8bf4b25896168a1637e3913e3"
	Oct 25 10:53:45 old-k8s-version-031983 kubelet[776]: I1025 10:53:45.231943     776 scope.go:117] "RemoveContainer" containerID="c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e"
	Oct 25 10:53:45 old-k8s-version-031983 kubelet[776]: E1025 10:53:45.232271     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wc9hh_kubernetes-dashboard(51ca2d1c-c8a7-4086-835f-2942a08f2e9d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh" podUID="51ca2d1c-c8a7-4086-835f-2942a08f2e9d"
	Oct 25 10:53:46 old-k8s-version-031983 kubelet[776]: I1025 10:53:46.237689     776 scope.go:117] "RemoveContainer" containerID="c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e"
	Oct 25 10:53:46 old-k8s-version-031983 kubelet[776]: E1025 10:53:46.237960     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wc9hh_kubernetes-dashboard(51ca2d1c-c8a7-4086-835f-2942a08f2e9d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh" podUID="51ca2d1c-c8a7-4086-835f-2942a08f2e9d"
	Oct 25 10:53:49 old-k8s-version-031983 kubelet[776]: I1025 10:53:49.907277     776 scope.go:117] "RemoveContainer" containerID="c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e"
	Oct 25 10:53:49 old-k8s-version-031983 kubelet[776]: E1025 10:53:49.907646     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wc9hh_kubernetes-dashboard(51ca2d1c-c8a7-4086-835f-2942a08f2e9d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh" podUID="51ca2d1c-c8a7-4086-835f-2942a08f2e9d"
	Oct 25 10:53:58 old-k8s-version-031983 kubelet[776]: I1025 10:53:58.270268     776 scope.go:117] "RemoveContainer" containerID="89d684ba83e381a7a64dfe04e56c3aeb40ac87bb22b331b2ff4e4a21ccbcf692"
	Oct 25 10:53:58 old-k8s-version-031983 kubelet[776]: I1025 10:53:58.292725     776 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-d2zpz" podStartSLOduration=10.018355918 podCreationTimestamp="2025-10-25 10:53:39 +0000 UTC" firstStartedPulling="2025-10-25 10:53:39.964974447 +0000 UTC m=+19.085006317" lastFinishedPulling="2025-10-25 10:53:49.239285107 +0000 UTC m=+28.359316985" observedRunningTime="2025-10-25 10:53:50.263746536 +0000 UTC m=+29.383778423" watchObservedRunningTime="2025-10-25 10:53:58.292666586 +0000 UTC m=+37.412698456"
	Oct 25 10:54:05 old-k8s-version-031983 kubelet[776]: I1025 10:54:05.069431     776 scope.go:117] "RemoveContainer" containerID="c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e"
	Oct 25 10:54:05 old-k8s-version-031983 kubelet[776]: I1025 10:54:05.307550     776 scope.go:117] "RemoveContainer" containerID="c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e"
	Oct 25 10:54:05 old-k8s-version-031983 kubelet[776]: I1025 10:54:05.307842     776 scope.go:117] "RemoveContainer" containerID="61207ccd40885efe969607525a97edab12360330c40d8763143e717d0f79d0a0"
	Oct 25 10:54:05 old-k8s-version-031983 kubelet[776]: E1025 10:54:05.308154     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wc9hh_kubernetes-dashboard(51ca2d1c-c8a7-4086-835f-2942a08f2e9d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh" podUID="51ca2d1c-c8a7-4086-835f-2942a08f2e9d"
	Oct 25 10:54:09 old-k8s-version-031983 kubelet[776]: I1025 10:54:09.906888     776 scope.go:117] "RemoveContainer" containerID="61207ccd40885efe969607525a97edab12360330c40d8763143e717d0f79d0a0"
	Oct 25 10:54:09 old-k8s-version-031983 kubelet[776]: E1025 10:54:09.907231     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wc9hh_kubernetes-dashboard(51ca2d1c-c8a7-4086-835f-2942a08f2e9d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh" podUID="51ca2d1c-c8a7-4086-835f-2942a08f2e9d"
	Oct 25 10:54:18 old-k8s-version-031983 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:54:18 old-k8s-version-031983 kubelet[776]: I1025 10:54:18.517018     776 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 10:54:18 old-k8s-version-031983 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:54:18 old-k8s-version-031983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [aaf729d4b726bd4ed2dcc9bceb249d54230b6890b628f49f2294833bd5b31249] <==
	2025/10/25 10:53:49 Using namespace: kubernetes-dashboard
	2025/10/25 10:53:49 Using in-cluster config to connect to apiserver
	2025/10/25 10:53:49 Using secret token for csrf signing
	2025/10/25 10:53:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:53:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:53:49 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 10:53:49 Generating JWE encryption key
	2025/10/25 10:53:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:53:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:53:49 Initializing JWE encryption key from synchronized object
	2025/10/25 10:53:49 Creating in-cluster Sidecar client
	2025/10/25 10:53:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:53:49 Serving insecurely on HTTP port: 9090
	2025/10/25 10:54:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:53:49 Starting overwatch
	
	
	==> storage-provisioner [1b6f7fca84250e969eb92a69cd684590b8335cd2574fefd2cc8d891ee242afd1] <==
	I1025 10:53:58.319988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:53:58.340290       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:53:58.340398       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 10:54:15.738336       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:54:15.738843       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-031983_938354f2-b9b3-4d86-bdbc-efdb23963044!
	I1025 10:54:15.741340       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bd6d084-f7ff-4686-9d53-994a26c512ba", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-031983_938354f2-b9b3-4d86-bdbc-efdb23963044 became leader
	I1025 10:54:15.839261       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-031983_938354f2-b9b3-4d86-bdbc-efdb23963044!
	
	
	==> storage-provisioner [89d684ba83e381a7a64dfe04e56c3aeb40ac87bb22b331b2ff4e4a21ccbcf692] <==
	I1025 10:53:27.790217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:53:57.794129       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-031983 -n old-k8s-version-031983
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-031983 -n old-k8s-version-031983: exit status 2 (372.67751ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-031983 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-031983
helpers_test.go:243: (dbg) docker inspect old-k8s-version-031983:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19",
	        "Created": "2025-10-25T10:51:50.262019678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 441285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:53:14.065553187Z",
	            "FinishedAt": "2025-10-25T10:53:13.180841674Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/hostname",
	        "HostsPath": "/var/lib/docker/containers/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/hosts",
	        "LogPath": "/var/lib/docker/containers/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19-json.log",
	        "Name": "/old-k8s-version-031983",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-031983:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-031983",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19",
	                "LowerDir": "/var/lib/docker/overlay2/f83171e8b997df441f44209753365f0b1cf2bf8af3f3f60c6899baef6933b87f-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f83171e8b997df441f44209753365f0b1cf2bf8af3f3f60c6899baef6933b87f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f83171e8b997df441f44209753365f0b1cf2bf8af3f3f60c6899baef6933b87f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f83171e8b997df441f44209753365f0b1cf2bf8af3f3f60c6899baef6933b87f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-031983",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-031983/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-031983",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-031983",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-031983",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c6c60a4f97644cb1728be4d0d6b4920511ec60fd91237bfe8de8afd68a822970",
	            "SandboxKey": "/var/run/docker/netns/c6c60a4f9764",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-031983": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:12:2a:f3:30:db",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f134955a6205418e30c262ff57f17637fbd69f7510dbb06a7800f5313ba135a3",
	                    "EndpointID": "318510742ef8a3e32b2967af06c6a8e66f17ce18b4d0218a118adf29bcd6c82e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-031983",
	                        "c9e4fcd1d868"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-031983 -n old-k8s-version-031983
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-031983 -n old-k8s-version-031983: exit status 2 (380.896876ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-031983 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-031983 logs -n 25: (1.369947086s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-759329 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo containerd config dump                                                                                                                                                                                                  │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ ssh     │ -p cilium-759329 sudo crio config                                                                                                                                                                                                             │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ delete  │ -p cilium-759329                                                                                                                                                                                                                              │ cilium-759329             │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p force-systemd-env-623432 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-623432  │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ start   │ -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ delete  │ -p kubernetes-upgrade-291330                                                                                                                                                                                                                  │ kubernetes-upgrade-291330 │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p cert-expiration-736062 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-736062    │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:51 UTC │
	│ delete  │ -p force-systemd-env-623432                                                                                                                                                                                                                   │ force-systemd-env-623432  │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p cert-options-771620 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-771620       │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:51 UTC │
	│ ssh     │ cert-options-771620 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-771620       │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ ssh     │ -p cert-options-771620 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-771620       │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ delete  │ -p cert-options-771620                                                                                                                                                                                                                        │ cert-options-771620       │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-031983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:52 UTC │                     │
	│ stop    │ -p old-k8s-version-031983 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-031983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:53 UTC │
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:54 UTC │
	│ image   │ old-k8s-version-031983 image list --format=json                                                                                                                                                                                               │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ pause   │ -p old-k8s-version-031983 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-031983    │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:53:13
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:53:13.776842  441160 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:53:13.776970  441160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:53:13.776979  441160 out.go:374] Setting ErrFile to fd 2...
	I1025 10:53:13.776984  441160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:53:13.777249  441160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:53:13.777686  441160 out.go:368] Setting JSON to false
	I1025 10:53:13.778731  441160 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9345,"bootTime":1761380249,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:53:13.778807  441160 start.go:141] virtualization:  
	I1025 10:53:13.781946  441160 out.go:179] * [old-k8s-version-031983] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:53:13.785833  441160 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:53:13.785953  441160 notify.go:220] Checking for updates...
	I1025 10:53:13.791792  441160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:53:13.794602  441160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:53:13.797539  441160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:53:13.800356  441160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:53:13.803133  441160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:53:13.806639  441160 config.go:182] Loaded profile config "old-k8s-version-031983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:53:13.810297  441160 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1025 10:53:13.813114  441160 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:53:13.835254  441160 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:53:13.835366  441160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:53:13.901849  441160 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:53:13.891753748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:53:13.901957  441160 docker.go:318] overlay module found
	I1025 10:53:13.905104  441160 out.go:179] * Using the docker driver based on existing profile
	I1025 10:53:13.908100  441160 start.go:305] selected driver: docker
	I1025 10:53:13.908119  441160 start.go:925] validating driver "docker" against &{Name:old-k8s-version-031983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:53:13.908242  441160 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:53:13.909240  441160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:53:13.973035  441160 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:53:13.957469096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:53:13.973369  441160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:53:13.973404  441160 cni.go:84] Creating CNI manager for ""
	I1025 10:53:13.973468  441160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:53:13.973510  441160 start.go:349] cluster config:
	{Name:old-k8s-version-031983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:53:13.978549  441160 out.go:179] * Starting "old-k8s-version-031983" primary control-plane node in "old-k8s-version-031983" cluster
	I1025 10:53:13.981520  441160 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:53:13.984382  441160 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:53:13.987175  441160 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:53:13.987240  441160 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1025 10:53:13.987253  441160 cache.go:58] Caching tarball of preloaded images
	I1025 10:53:13.987256  441160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:53:13.987353  441160 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:53:13.987363  441160 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 10:53:13.987504  441160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/config.json ...
	I1025 10:53:14.009820  441160 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:53:14.009847  441160 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:53:14.009862  441160 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:53:14.009895  441160 start.go:360] acquireMachinesLock for old-k8s-version-031983: {Name:mkea21c13c631a617ed8bc5861a3bc5db7c7a81f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:53:14.009959  441160 start.go:364] duration metric: took 39.64µs to acquireMachinesLock for "old-k8s-version-031983"
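
	The acquire/release pair above (with Delay:500ms and Timeout:10m0s) is a poll-until-deadline lock around machine operations. A minimal Go sketch of such a lock, assuming a lock-file path of our own choosing; this is an illustration of the pattern, not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file until the timeout elapses,
	// mirroring the Delay/Timeout parameters reported in the log line above.
	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + path)
			}
			time.Sleep(delay) // another process holds the lock; retry after the delay
		}
	}

	func main() {
		release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held")
	}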
	I1025 10:53:14.010015  441160 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:53:14.010028  441160 fix.go:54] fixHost starting: 
	I1025 10:53:14.010305  441160 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:53:14.028419  441160 fix.go:112] recreateIfNeeded on old-k8s-version-031983: state=Stopped err=<nil>
	W1025 10:53:14.028453  441160 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:53:14.031746  441160 out.go:252] * Restarting existing docker container for "old-k8s-version-031983" ...
	I1025 10:53:14.031866  441160 cli_runner.go:164] Run: docker start old-k8s-version-031983
	I1025 10:53:14.285508  441160 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:53:14.306895  441160 kic.go:430] container "old-k8s-version-031983" state is running.
	I1025 10:53:14.307383  441160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-031983
	I1025 10:53:14.328042  441160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/config.json ...
	I1025 10:53:14.328266  441160 machine.go:93] provisionDockerMachine start ...
	I1025 10:53:14.328335  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:14.360183  441160 main.go:141] libmachine: Using SSH client type: native
	I1025 10:53:14.360515  441160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1025 10:53:14.360921  441160 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:53:14.362107  441160 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 10:53:17.513564  441160 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-031983
	
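	The dial above fails once with "handshake failed: EOF" and then succeeds about three seconds later: sshd inside the just-restarted container was not yet accepting connections. A minimal Go sketch of that retry loop, using golang.org/x/crypto/ssh; the backoff policy and attempt count are assumptions, while the port (33413), user (docker), and key path are taken from the log:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry tolerates transient handshake failures while the
	// freshly started container's sshd comes up.
	func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
		var err error
		for i := 0; i < attempts; i++ {
			var c *ssh.Client
			if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
				return c, nil
			}
			time.Sleep(time.Second) // sshd may not be ready yet; retry
		}
		return nil, err
	}

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
		}
		client, err := dialWithRetry("127.0.0.1:33413", cfg, 10)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("connected")
	}
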
	I1025 10:53:17.513596  441160 ubuntu.go:182] provisioning hostname "old-k8s-version-031983"
	I1025 10:53:17.513663  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:17.539530  441160 main.go:141] libmachine: Using SSH client type: native
	I1025 10:53:17.539855  441160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1025 10:53:17.539874  441160 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-031983 && echo "old-k8s-version-031983" | sudo tee /etc/hostname
	I1025 10:53:17.701661  441160 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-031983
	
	I1025 10:53:17.701752  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:17.719242  441160 main.go:141] libmachine: Using SSH client type: native
	I1025 10:53:17.719637  441160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1025 10:53:17.719655  441160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-031983' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-031983/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-031983' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:53:17.874581  441160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:53:17.874609  441160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:53:17.874639  441160 ubuntu.go:190] setting up certificates
	I1025 10:53:17.874649  441160 provision.go:84] configureAuth start
	I1025 10:53:17.874721  441160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-031983
	I1025 10:53:17.892842  441160 provision.go:143] copyHostCerts
	I1025 10:53:17.893116  441160 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:53:17.893141  441160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:53:17.893226  441160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:53:17.893347  441160 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:53:17.893352  441160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:53:17.893378  441160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:53:17.893438  441160 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:53:17.893443  441160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:53:17.893466  441160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:53:17.893566  441160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-031983 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-031983]
	I1025 10:53:18.342720  441160 provision.go:177] copyRemoteCerts
	I1025 10:53:18.342788  441160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:53:18.342831  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:18.359622  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:18.465906  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:53:18.484763  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 10:53:18.502444  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:53:18.520965  441160 provision.go:87] duration metric: took 646.291938ms to configureAuth
	I1025 10:53:18.521034  441160 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:53:18.521313  441160 config.go:182] Loaded profile config "old-k8s-version-031983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:53:18.521463  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:18.540474  441160 main.go:141] libmachine: Using SSH client type: native
	I1025 10:53:18.540778  441160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1025 10:53:18.540796  441160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:53:18.855887  441160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:53:18.855963  441160 machine.go:96] duration metric: took 4.527687332s to provisionDockerMachine
	I1025 10:53:18.855988  441160 start.go:293] postStartSetup for "old-k8s-version-031983" (driver="docker")
	I1025 10:53:18.856035  441160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:53:18.856135  441160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:53:18.856212  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:18.877597  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:18.982150  441160 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:53:18.985524  441160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:53:18.985555  441160 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:53:18.985567  441160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:53:18.985626  441160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:53:18.985714  441160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:53:18.985823  441160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:53:18.993374  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:53:19.018526  441160 start.go:296] duration metric: took 162.502251ms for postStartSetup
	I1025 10:53:19.018613  441160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:53:19.018677  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:19.036887  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:19.140060  441160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:53:19.144987  441160 fix.go:56] duration metric: took 5.134949577s for fixHost
	I1025 10:53:19.145016  441160 start.go:83] releasing machines lock for "old-k8s-version-031983", held for 5.135045012s
	I1025 10:53:19.145097  441160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-031983
	I1025 10:53:19.162686  441160 ssh_runner.go:195] Run: cat /version.json
	I1025 10:53:19.162756  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:19.163042  441160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:53:19.163104  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:19.181116  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:19.183991  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:19.285962  441160 ssh_runner.go:195] Run: systemctl --version
	I1025 10:53:19.377462  441160 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:53:19.415533  441160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:53:19.420136  441160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:53:19.420261  441160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:53:19.428441  441160 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
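
	The find/mv step above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the recommended kindnet CNI stays active. A rough Go equivalent of that rename pass (matching rules mirror the find expression; error handling simplified):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		entries, err := os.ReadDir("/etc/cni/net.d")
		if err != nil {
			panic(err)
		}
		for _, e := range entries {
			name := e.Name()
			// Skip directories and configs that are already disabled.
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join("/etc/cni/net.d", name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					panic(err)
				}
				fmt.Println("disabled", src)
			}
		}
	}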
	I1025 10:53:19.428466  441160 start.go:495] detecting cgroup driver to use...
	I1025 10:53:19.428500  441160 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:53:19.428566  441160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:53:19.444790  441160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:53:19.458431  441160 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:53:19.458535  441160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:53:19.475152  441160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:53:19.488965  441160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:53:19.629519  441160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:53:19.754915  441160 docker.go:234] disabling docker service ...
	I1025 10:53:19.755048  441160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:53:19.770231  441160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:53:19.783993  441160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:53:19.907961  441160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:53:20.033332  441160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:53:20.048145  441160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:53:20.064786  441160 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 10:53:20.064870  441160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.074982  441160 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:53:20.075063  441160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.084852  441160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.094420  441160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.105438  441160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:53:20.114080  441160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.123547  441160 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.132752  441160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:53:20.141787  441160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:53:20.149651  441160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:53:20.157831  441160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:53:20.271455  441160 ssh_runner.go:195] Run: sudo systemctl restart crio
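
	Each sed command above rewrites one setting in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A Go sketch of the first edit, pinning the pause image; the remaining edits (cgroup_manager, conmon_cgroup, default_sysctls) follow the same read-rewrite-write pattern:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Replace any existing pause_image line, like the sed expression above.
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		data = re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
	}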
	I1025 10:53:20.403967  441160 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:53:20.404047  441160 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:53:20.407916  441160 start.go:563] Will wait 60s for crictl version
	I1025 10:53:20.407979  441160 ssh_runner.go:195] Run: which crictl
	I1025 10:53:20.414454  441160 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:53:20.440643  441160 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:53:20.440803  441160 ssh_runner.go:195] Run: crio --version
	I1025 10:53:20.470864  441160 ssh_runner.go:195] Run: crio --version
	I1025 10:53:20.504716  441160 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1025 10:53:20.507717  441160 cli_runner.go:164] Run: docker network inspect old-k8s-version-031983 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:53:20.524165  441160 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:53:20.529642  441160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:53:20.539902  441160 kubeadm.go:883] updating cluster {Name:old-k8s-version-031983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:53:20.540023  441160 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:53:20.540075  441160 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:53:20.577921  441160 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:53:20.577951  441160 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:53:20.578039  441160 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:53:20.607425  441160 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:53:20.607447  441160 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:53:20.607454  441160 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1025 10:53:20.607589  441160 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-031983 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:53:20.607677  441160 ssh_runner.go:195] Run: crio config
	I1025 10:53:20.667448  441160 cni.go:84] Creating CNI manager for ""
	I1025 10:53:20.667473  441160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:53:20.667496  441160 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:53:20.667546  441160 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-031983 NodeName:old-k8s-version-031983 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:53:20.667712  441160 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-031983"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:53:20.667791  441160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1025 10:53:20.675659  441160 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:53:20.675824  441160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:53:20.683822  441160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1025 10:53:20.696966  441160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:53:20.711474  441160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1025 10:53:20.725500  441160 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:53:20.729258  441160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:53:20.739184  441160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:53:20.860755  441160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:53:20.877971  441160 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983 for IP: 192.168.85.2
	I1025 10:53:20.878002  441160 certs.go:195] generating shared ca certs ...
	I1025 10:53:20.878025  441160 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:53:20.878226  441160 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:53:20.878301  441160 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:53:20.878318  441160 certs.go:257] generating profile certs ...
	I1025 10:53:20.878422  441160 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.key
	I1025 10:53:20.878516  441160 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.key.11393817
	I1025 10:53:20.878589  441160 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.key
	I1025 10:53:20.878719  441160 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:53:20.878779  441160 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:53:20.878795  441160 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:53:20.878836  441160 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:53:20.878883  441160 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:53:20.878916  441160 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:53:20.878979  441160 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:53:20.879566  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:53:20.904000  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:53:20.924305  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:53:20.947424  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:53:20.973000  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 10:53:21.004236  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:53:21.030183  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:53:21.055404  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:53:21.080432  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:53:21.101890  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:53:21.123491  441160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:53:21.143051  441160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:53:21.158270  441160 ssh_runner.go:195] Run: openssl version
	I1025 10:53:21.164885  441160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:53:21.177311  441160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:53:21.181467  441160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:53:21.181535  441160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:53:21.223424  441160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:53:21.231235  441160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:53:21.239519  441160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:53:21.243170  441160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:53:21.243236  441160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:53:21.287344  441160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:53:21.295421  441160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:53:21.303898  441160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:53:21.308007  441160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:53:21.308106  441160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:53:21.354044  441160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
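
	The ls/openssl/ln triples above install each CA certificate the way OpenSSL expects to find it: a <subject-hash>.0 symlink in /etc/ssl/certs pointing at the PEM. A Go sketch of one hash-and-link step, shelling out to openssl just as the log does (paths taken from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl prints the subject hash (e.g. b5213941) on one line.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // ln -fs semantics: replace any stale link
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
		fmt.Println(link, "-> /etc/ssl/certs/minikubeCA.pem")
	}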
	I1025 10:53:21.362267  441160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:53:21.366180  441160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:53:21.407371  441160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:53:21.453469  441160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:53:21.501020  441160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:53:21.578577  441160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:53:21.639140  441160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
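
	Each `-checkend 86400` invocation above asks whether a certificate expires within the next 24 hours. The same check in pure Go with crypto/x509, using one of the certificate paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: fail if NotAfter
		// falls within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 86400s")
			os.Exit(1)
		}
		fmt.Println("certificate valid past the check window")
	}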
	I1025 10:53:21.736500  441160 kubeadm.go:400] StartCluster: {Name:old-k8s-version-031983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-031983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:53:21.736618  441160 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:53:21.736741  441160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:53:21.796515  441160 cri.go:89] found id: "eeeddcccacdfbafcc1ee8c599ecbc7ea9e0d84371cb6d1703687ee61d8bb755f"
	I1025 10:53:21.796538  441160 cri.go:89] found id: "e8ffdbbd81192b725fe1a39da32c0b7c2876d35dcd77f38936cc2fce64d55965"
	I1025 10:53:21.796545  441160 cri.go:89] found id: "06385d2fbad2b260e0ee5f55b8d5cba605e51721e2396456ae2851909149c4a9"
	I1025 10:53:21.796559  441160 cri.go:89] found id: "daa7921a2bd460dd730ca528a51c0a28ebc5238cac5327f709bbab5a00da8e58"
	I1025 10:53:21.796589  441160 cri.go:89] found id: ""
	I1025 10:53:21.796646  441160 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:53:21.811344  441160 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:53:21Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:53:21.811475  441160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:53:21.826114  441160 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:53:21.826135  441160 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:53:21.826231  441160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:53:21.840225  441160 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:53:21.840920  441160 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-031983" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:53:21.841290  441160 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-259409/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-031983" cluster setting kubeconfig missing "old-k8s-version-031983" context setting]
	I1025 10:53:21.841859  441160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
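
	The repair above adds the missing cluster and context entries for this profile to the shared kubeconfig. A sketch of an equivalent edit with client-go's clientcmd package; using client-go here is our assumption about tooling, and the server URL is illustrative:

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/21767-259409/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		if cfg.Clusters == nil {
			cfg.Clusters = map[string]*api.Cluster{}
		}
		if cfg.Contexts == nil {
			cfg.Contexts = map[string]*api.Context{}
		}
		name := "old-k8s-version-031983"
		if _, ok := cfg.Clusters[name]; !ok {
			// Server URL is illustrative; the real entry depends on the driver's port mapping.
			cfg.Clusters[name] = &api.Cluster{Server: "https://192.168.85.2:8443"}
		}
		if _, ok := cfg.Contexts[name]; !ok {
			cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
		}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			panic(err)
		}
	}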
	I1025 10:53:21.843928  441160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:53:21.858029  441160 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:53:21.858121  441160 kubeadm.go:601] duration metric: took 31.923265ms to restartPrimaryControlPlane
	I1025 10:53:21.858137  441160 kubeadm.go:402] duration metric: took 121.646875ms to StartCluster
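
	The restart path above decides whether the control plane needs reconfiguration by diffing the freshly rendered kubeadm.yaml.new against the deployed kubeadm.yaml (the diff at 10:53:21.843928): identical files mean "does not require reconfiguration". A sketch of that decision, relying on diff's exit status (0 = identical, 1 = different):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err := cmd.Run(); err == nil {
			fmt.Println("configs identical: skip kubeadm re-init")
		} else if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			fmt.Println("configs differ: reconfigure the control plane")
		} else {
			panic(err) // diff itself failed (missing file, etc.)
		}
	}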
	I1025 10:53:21.858168  441160 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:53:21.858260  441160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:53:21.859358  441160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:53:21.859643  441160 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:53:21.860102  441160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:53:21.860183  441160 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-031983"
	I1025 10:53:21.860204  441160 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-031983"
	I1025 10:53:21.860204  441160 config.go:182] Loaded profile config "old-k8s-version-031983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	W1025 10:53:21.860211  441160 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:53:21.860235  441160 host.go:66] Checking if "old-k8s-version-031983" exists ...
	I1025 10:53:21.860280  441160 addons.go:69] Setting dashboard=true in profile "old-k8s-version-031983"
	I1025 10:53:21.860301  441160 addons.go:238] Setting addon dashboard=true in "old-k8s-version-031983"
	W1025 10:53:21.860307  441160 addons.go:247] addon dashboard should already be in state true
	I1025 10:53:21.860326  441160 host.go:66] Checking if "old-k8s-version-031983" exists ...
	I1025 10:53:21.860684  441160 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:53:21.860973  441160 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:53:21.865942  441160 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-031983"
	I1025 10:53:21.866263  441160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-031983"
	I1025 10:53:21.866169  441160 out.go:179] * Verifying Kubernetes components...
	I1025 10:53:21.867362  441160 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:53:21.871459  441160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:53:21.923079  441160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:53:21.923228  441160 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:53:21.926173  441160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:53:21.926198  441160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:53:21.926274  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:21.930969  441160 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:53:21.933959  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:53:21.934238  441160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:53:21.934325  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:21.945392  441160 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-031983"
	W1025 10:53:21.945415  441160 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:53:21.945439  441160 host.go:66] Checking if "old-k8s-version-031983" exists ...
	I1025 10:53:21.945875  441160 cli_runner.go:164] Run: docker container inspect old-k8s-version-031983 --format={{.State.Status}}
	I1025 10:53:22.003066  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:22.018521  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:22.027629  441160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:53:22.027650  441160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:53:22.027722  441160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-031983
	I1025 10:53:22.056502  441160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/old-k8s-version-031983/id_rsa Username:docker}
	I1025 10:53:22.260179  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:53:22.260256  441160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:53:22.272828  441160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:53:22.313691  441160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:53:22.318622  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:53:22.318646  441160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:53:22.320449  441160 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-031983" to be "Ready" ...
	I1025 10:53:22.361045  441160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:53:22.379108  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:53:22.379133  441160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:53:22.444162  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:53:22.444186  441160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:53:22.514329  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:53:22.514355  441160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:53:22.580443  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:53:22.580470  441160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:53:22.640026  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:53:22.640051  441160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:53:22.689928  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:53:22.689954  441160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:53:22.714220  441160 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:53:22.714244  441160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:53:22.733190  441160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:53:26.820977  441160 node_ready.go:49] node "old-k8s-version-031983" is "Ready"
	I1025 10:53:26.821009  441160 node_ready.go:38] duration metric: took 4.500524839s for node "old-k8s-version-031983" to be "Ready" ...
	I1025 10:53:26.821031  441160 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:53:26.821090  441160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:53:28.601887  441160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.288147885s)
	I1025 10:53:28.602065  441160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.240995491s)
	I1025 10:53:29.137655  441160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.40441892s)
	I1025 10:53:29.137849  441160 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.316737651s)
	I1025 10:53:29.137879  441160 api_server.go:72] duration metric: took 7.27820138s to wait for apiserver process to appear ...
	I1025 10:53:29.137886  441160 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:53:29.137902  441160 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:53:29.140866  441160 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-031983 addons enable metrics-server
	
	I1025 10:53:29.143856  441160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1025 10:53:29.146773  441160 addons.go:514] duration metric: took 7.286666082s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:53:29.148550  441160 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 10:53:29.150121  441160 api_server.go:141] control plane version: v1.28.0
	I1025 10:53:29.150154  441160 api_server.go:131] duration metric: took 12.259485ms to wait for apiserver health ...
	I1025 10:53:29.150164  441160 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:53:29.155275  441160 system_pods.go:59] 8 kube-system pods found
	I1025 10:53:29.155328  441160 system_pods.go:61] "coredns-5dd5756b68-jd2rz" [24ce3549-a06c-405e-943d-2982e2ee63de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:53:29.155339  441160 system_pods.go:61] "etcd-old-k8s-version-031983" [7afeb15f-fef7-4c88-ba96-7cd4bd24b4a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:53:29.155350  441160 system_pods.go:61] "kindnet-2sbx5" [b129bb16-e936-4865-b06a-a71756a88fa9] Running
	I1025 10:53:29.155358  441160 system_pods.go:61] "kube-apiserver-old-k8s-version-031983" [9a0fa9ff-e383-482b-9217-5089637f3579] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:53:29.155372  441160 system_pods.go:61] "kube-controller-manager-old-k8s-version-031983" [1e037440-e4e2-4392-8c8d-ac2bcceb2723] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:53:29.155378  441160 system_pods.go:61] "kube-proxy-q597g" [21cc5901-1ab1-495b-9b85-3812b03b4ddc] Running
	I1025 10:53:29.155387  441160 system_pods.go:61] "kube-scheduler-old-k8s-version-031983" [e163349f-3264-496f-b34f-7ad2a108c7fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:53:29.155405  441160 system_pods.go:61] "storage-provisioner" [7a27f19d-8bc4-4730-bb35-fd6d4311ef52] Running
	I1025 10:53:29.155418  441160 system_pods.go:74] duration metric: took 5.248057ms to wait for pod list to return data ...
	I1025 10:53:29.155426  441160 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:53:29.158044  441160 default_sa.go:45] found service account: "default"
	I1025 10:53:29.158082  441160 default_sa.go:55] duration metric: took 2.650174ms for default service account to be created ...
	I1025 10:53:29.158092  441160 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:53:29.161800  441160 system_pods.go:86] 8 kube-system pods found
	I1025 10:53:29.161846  441160 system_pods.go:89] "coredns-5dd5756b68-jd2rz" [24ce3549-a06c-405e-943d-2982e2ee63de] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:53:29.161858  441160 system_pods.go:89] "etcd-old-k8s-version-031983" [7afeb15f-fef7-4c88-ba96-7cd4bd24b4a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:53:29.161873  441160 system_pods.go:89] "kindnet-2sbx5" [b129bb16-e936-4865-b06a-a71756a88fa9] Running
	I1025 10:53:29.161880  441160 system_pods.go:89] "kube-apiserver-old-k8s-version-031983" [9a0fa9ff-e383-482b-9217-5089637f3579] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:53:29.161896  441160 system_pods.go:89] "kube-controller-manager-old-k8s-version-031983" [1e037440-e4e2-4392-8c8d-ac2bcceb2723] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:53:29.161913  441160 system_pods.go:89] "kube-proxy-q597g" [21cc5901-1ab1-495b-9b85-3812b03b4ddc] Running
	I1025 10:53:29.161925  441160 system_pods.go:89] "kube-scheduler-old-k8s-version-031983" [e163349f-3264-496f-b34f-7ad2a108c7fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:53:29.161929  441160 system_pods.go:89] "storage-provisioner" [7a27f19d-8bc4-4730-bb35-fd6d4311ef52] Running
	I1025 10:53:29.161936  441160 system_pods.go:126] duration metric: took 3.839026ms to wait for k8s-apps to be running ...
	I1025 10:53:29.161950  441160 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:53:29.162061  441160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:53:29.189567  441160 system_svc.go:56] duration metric: took 27.60832ms WaitForService to wait for kubelet
	I1025 10:53:29.189610  441160 kubeadm.go:586] duration metric: took 7.32993012s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:53:29.189630  441160 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:53:29.192906  441160 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:53:29.192955  441160 node_conditions.go:123] node cpu capacity is 2
	I1025 10:53:29.192968  441160 node_conditions.go:105] duration metric: took 3.33274ms to run NodePressure ...
	I1025 10:53:29.192982  441160 start.go:241] waiting for startup goroutines ...
	I1025 10:53:29.192989  441160 start.go:246] waiting for cluster config update ...
	I1025 10:53:29.193000  441160 start.go:255] writing updated cluster config ...
	I1025 10:53:29.193324  441160 ssh_runner.go:195] Run: rm -f paused
	I1025 10:53:29.196992  441160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:53:29.201275  441160 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-jd2rz" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:53:31.207741  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:33.207867  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:35.707524  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:37.708750  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:40.210587  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:42.708348  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:45.214214  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:47.712464  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:50.207287  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:52.207728  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:54.707389  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:56.707660  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:53:59.207035  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:54:01.207818  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	W1025 10:54:03.706687  441160 pod_ready.go:104] pod "coredns-5dd5756b68-jd2rz" is not "Ready", error: <nil>
	I1025 10:54:04.708791  441160 pod_ready.go:94] pod "coredns-5dd5756b68-jd2rz" is "Ready"
	I1025 10:54:04.708821  441160 pod_ready.go:86] duration metric: took 35.507517209s for pod "coredns-5dd5756b68-jd2rz" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:04.711896  441160 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:04.716999  441160 pod_ready.go:94] pod "etcd-old-k8s-version-031983" is "Ready"
	I1025 10:54:04.717025  441160 pod_ready.go:86] duration metric: took 5.103374ms for pod "etcd-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:04.720524  441160 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:04.725413  441160 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-031983" is "Ready"
	I1025 10:54:04.725438  441160 pod_ready.go:86] duration metric: took 4.889776ms for pod "kube-apiserver-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:04.728841  441160 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:04.905635  441160 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-031983" is "Ready"
	I1025 10:54:04.905719  441160 pod_ready.go:86] duration metric: took 176.851299ms for pod "kube-controller-manager-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:05.110811  441160 pod_ready.go:83] waiting for pod "kube-proxy-q597g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:05.505877  441160 pod_ready.go:94] pod "kube-proxy-q597g" is "Ready"
	I1025 10:54:05.505909  441160 pod_ready.go:86] duration metric: took 395.01743ms for pod "kube-proxy-q597g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:05.706063  441160 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:06.106840  441160 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-031983" is "Ready"
	I1025 10:54:06.106873  441160 pod_ready.go:86] duration metric: took 400.779419ms for pod "kube-scheduler-old-k8s-version-031983" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:54:06.106886  441160 pod_ready.go:40] duration metric: took 36.909861799s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:54:06.166539  441160 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1025 10:54:06.169868  441160 out.go:203] 
	W1025 10:54:06.173017  441160 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:54:06.176032  441160 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:54:06.178986  441160 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-031983" cluster and "default" namespace by default
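	For reference, the healthz probe logged above can be repeated from the host. A minimal sketch, assuming minikube created the usual kubeconfig context named after the profile:
	
		kubectl --context old-k8s-version-031983 get --raw /healthz
		# prints "ok" on a healthy apiserver, matching the 200 seen in the log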
	
	
	==> CRI-O <==
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.073355897Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.080955434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.08152947Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.099463985Z" level=info msg="Created container 61207ccd40885efe969607525a97edab12360330c40d8763143e717d0f79d0a0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh/dashboard-metrics-scraper" id=0af1547e-d731-4642-9c85-a72aabb3dc0b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.100622396Z" level=info msg="Starting container: 61207ccd40885efe969607525a97edab12360330c40d8763143e717d0f79d0a0" id=f1c528fa-8c14-47df-bc4c-424d6c7544ad name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.103301493Z" level=info msg="Started container" PID=1646 containerID=61207ccd40885efe969607525a97edab12360330c40d8763143e717d0f79d0a0 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh/dashboard-metrics-scraper id=f1c528fa-8c14-47df-bc4c-424d6c7544ad name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc0276c98573e2b947a5b228fad1c038cc4923b08bc9c332a68c1d6ce9eea2ee
	Oct 25 10:54:05 old-k8s-version-031983 conmon[1644]: conmon 61207ccd40885efe9696 <ninfo>: container 1646 exited with status 1
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.309866829Z" level=info msg="Removing container: c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e" id=e4f48877-5ac2-4110-8baa-94ad75430e60 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.318275201Z" level=info msg="Error loading conmon cgroup of container c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e: cgroup deleted" id=e4f48877-5ac2-4110-8baa-94ad75430e60 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:54:05 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:05.323000283Z" level=info msg="Removed container c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh/dashboard-metrics-scraper" id=e4f48877-5ac2-4110-8baa-94ad75430e60 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.942810758Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.947661313Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.947703274Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.94772588Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.951161825Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.951198297Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.951223077Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.955693641Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.955732222Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.95576181Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.959428847Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.959464211Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.959488572Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.962868427Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:54:07 old-k8s-version-031983 crio[650]: time="2025-10-25T10:54:07.962905465Z" level=info msg="Updated default CNI network name to kindnet"
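	The CRI-O entries above come from the node's systemd journal. A sketch for pulling the same window manually, assuming the Docker-driver node is still running:
	
		minikube -p old-k8s-version-031983 ssh -- sudo journalctl -u crio --no-pager --since '5 minutes ago'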
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	61207ccd40885       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   bc0276c98573e       dashboard-metrics-scraper-5f989dc9cf-wc9hh       kubernetes-dashboard
	1b6f7fca84250       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   4b19f19b0f6c2       storage-provisioner                              kube-system
	aaf729d4b726b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago       Running             kubernetes-dashboard        0                   45d30537cf423       kubernetes-dashboard-8694d4445c-d2zpz            kubernetes-dashboard
	282bf80083c3f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   b2ef1972039f0       coredns-5dd5756b68-jd2rz                         kube-system
	8018aae064ea4       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   60cb06078bfbf       busybox                                          default
	61a0b589a5afa       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   7bf495c1e29a8       kindnet-2sbx5                                    kube-system
	9fd9a8a5cc009       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   4a3a44f303132       kube-proxy-q597g                                 kube-system
	89d684ba83e38       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   4b19f19b0f6c2       storage-provisioner                              kube-system
	eeeddcccacdfb       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   650bb121db0bf       kube-apiserver-old-k8s-version-031983            kube-system
	e8ffdbbd81192       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   371bcd9ffbd93       etcd-old-k8s-version-031983                      kube-system
	06385d2fbad2b       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   3f873df199b4e       kube-scheduler-old-k8s-version-031983            kube-system
	daa7921a2bd46       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   7e360217702e4       kube-controller-manager-old-k8s-version-031983   kube-system
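	The listing above is crictl's view of CRI-O. Assuming the standard minikube node image (which ships a preconfigured crictl), it can be regenerated with:
	
		minikube -p old-k8s-version-031983 ssh -- sudo crictl ps -a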
	
	
	==> coredns [282bf80083c3f2f30757b4a4e969d8d35b11cd8b2d4a79b5d913e2e97221898b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34008 - 20101 "HINFO IN 8084456919790498261.6056145886745122556. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015860437s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
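	The CoreDNS block is an ordinary pod log; a sketch for fetching it directly, using the kubeadm-default k8s-app=kube-dns label:
	
		kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20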
	
	
	==> describe nodes <==
	Name:               old-k8s-version-031983
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-031983
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=old-k8s-version-031983
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_52_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:52:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-031983
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:54:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:53:57 +0000   Sat, 25 Oct 2025 10:52:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:53:57 +0000   Sat, 25 Oct 2025 10:52:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:53:57 +0000   Sat, 25 Oct 2025 10:52:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:53:57 +0000   Sat, 25 Oct 2025 10:52:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-031983
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                d37866a6-3d06-4c4f-bdc5-afc6ab378351
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-jd2rz                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-old-k8s-version-031983                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m4s
	  kube-system                 kindnet-2sbx5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-031983             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-031983    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-q597g                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-031983             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-wc9hh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-d2zpz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-031983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m4s                   kubelet          Node old-k8s-version-031983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s                   kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m4s                   kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                   node-controller  Node old-k8s-version-031983 event: Registered Node old-k8s-version-031983 in Controller
	  Normal  NodeReady                97s                    kubelet          Node old-k8s-version-031983 status is now: NodeReady
	  Normal  Starting                 63s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node old-k8s-version-031983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node old-k8s-version-031983 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-031983 event: Registered Node old-k8s-version-031983 in Controller
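	The section above is unmodified kubectl output; assuming the context name matches the profile, it can be reproduced with:
	
		kubectl --context old-k8s-version-031983 describe node old-k8s-version-031983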
	
	
	==> dmesg <==
	[Oct25 10:25] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:31] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[ +23.146409] overlayfs: idmapped layers are currently not supported
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
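	These are kernel ring-buffer messages from the shared host kernel; the idmapped-layers line is an overlayfs notice that is generally benign in Docker-driver runs. To view the same messages with readable timestamps:
	
		minikube -p old-k8s-version-031983 ssh -- sudo dmesg -T | grep overlayfs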
	
	
	==> etcd [e8ffdbbd81192b725fe1a39da32c0b7c2876d35dcd77f38936cc2fce64d55965] <==
	{"level":"info","ts":"2025-10-25T10:53:22.087578Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T10:53:22.073022Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-10-25T10:53:22.073174Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:53:22.087655Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:53:22.087672Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:53:22.087216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-25T10:53:22.088314Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-25T10:53:22.088635Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:53:22.08867Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:53:22.072916Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:53:22.088897Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:53:23.174039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T10:53:23.174152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T10:53:23.174209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T10:53:23.174253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T10:53:23.174289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T10:53:23.174333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-25T10:53:23.174363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T10:53:23.18075Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-031983 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T10:53:23.180975Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:53:23.181617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:53:23.183777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T10:53:23.182004Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-25T10:53:23.190609Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T10:53:23.19069Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:54:23 up  2:36,  0 user,  load average: 2.00, 3.23, 2.76
	Linux old-k8s-version-031983 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
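	This section concatenates uptime, uname -a, and PRETTY_NAME from /etc/os-release on the node; a sketch for collecting the same three lines:
	
		minikube -p old-k8s-version-031983 ssh -- 'uptime; uname -a; grep PRETTY_NAME /etc/os-release'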
	
	
	==> kindnet [61a0b589a5afa1251610e31cdc2e6f467b5f10034155e9f8f31f35ca1b7206db] <==
	I1025 10:53:27.749399       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:53:27.749674       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:53:27.749799       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:53:27.749809       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:53:27.749822       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:53:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:53:27.946586       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:53:27.946617       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:53:27.946626       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:53:27.946950       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:53:57.941340       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:53:57.946928       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:53:57.946928       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:53:57.948241       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 10:53:59.346786       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:53:59.346817       1 metrics.go:72] Registering metrics
	I1025 10:53:59.346888       1 controller.go:711] "Syncing nftables rules"
	I1025 10:54:07.942491       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:54:07.942558       1 main.go:301] handling current node
	I1025 10:54:17.948026       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:54:17.948060       1 main.go:301] handling current node
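	The kindnet sync here and the CNI monitoring events in the CRI-O section both refer to the same on-node config file, which can be inspected with:
	
		minikube -p old-k8s-version-031983 ssh -- cat /etc/cni/net.d/10-kindnet.conflist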
	
	
	==> kube-apiserver [eeeddcccacdfbafcc1ee8c599ecbc7ea9e0d84371cb6d1703687ee61d8bb755f] <==
	I1025 10:53:26.846590       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 10:53:26.851681       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 10:53:26.856309       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:53:26.880766       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 10:53:26.882209       1 aggregator.go:166] initial CRD sync complete...
	I1025 10:53:26.882269       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 10:53:26.882277       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:53:26.882285       1 cache.go:39] Caches are synced for autoregister controller
	E1025 10:53:26.913818       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:53:26.934769       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 10:53:26.934860       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1025 10:53:26.935026       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 10:53:26.947219       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 10:53:27.484532       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:53:28.897430       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 10:53:28.965033       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 10:53:28.995825       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:53:29.009444       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:53:29.021367       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 10:53:29.107928       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.131.130"}
	I1025 10:53:29.129704       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.52.231"}
	I1025 10:53:39.359201       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:53:39.460809       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 10:53:39.460808       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 10:53:39.559508       1 controller.go:624] quota admission added evaluator for: replicasets.apps
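	Beyond the /healthz endpoint minikube polls, a 1.28 apiserver also serves a per-check readiness report; a sketch:
	
		kubectl get --raw '/readyz?verbose'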
	
	
	==> kube-controller-manager [daa7921a2bd460dd730ca528a51c0a28ebc5238cac5327f709bbab5a00da8e58] <==
	I1025 10:53:39.417786       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="362.353959ms"
	I1025 10:53:39.417880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.95µs"
	I1025 10:53:39.565025       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1025 10:53:39.571017       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1025 10:53:39.591798       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-wc9hh"
	I1025 10:53:39.597840       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:53:39.598855       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-d2zpz"
	I1025 10:53:39.608746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="43.765318ms"
	I1025 10:53:39.616779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.247098ms"
	I1025 10:53:39.636608       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.771693ms"
	I1025 10:53:39.636875       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.955µs"
	I1025 10:53:39.638920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="29.787992ms"
	I1025 10:53:39.639023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.693µs"
	I1025 10:53:39.648891       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:53:39.648973       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 10:53:39.658549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="62.77µs"
	I1025 10:53:44.237138       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.67µs"
	I1025 10:53:45.258924       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="116.005µs"
	I1025 10:53:46.255083       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.876µs"
	I1025 10:53:50.281419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.512752ms"
	I1025 10:53:50.281626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="84.776µs"
	I1025 10:54:04.234660       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.822945ms"
	I1025 10:54:04.235056       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.912µs"
	I1025 10:54:05.333027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.398µs"
	I1025 10:54:09.933130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.284µs"
	
	
	==> kube-proxy [9fd9a8a5cc009ac96b64da624b784533a1e04c500388713c6f7b31e40b933a8a] <==
	I1025 10:53:27.842623       1 server_others.go:69] "Using iptables proxy"
	I1025 10:53:27.870350       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1025 10:53:27.896956       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:53:27.898733       1 server_others.go:152] "Using iptables Proxier"
	I1025 10:53:27.898826       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 10:53:27.898861       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 10:53:27.898934       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 10:53:27.899178       1 server.go:846] "Version info" version="v1.28.0"
	I1025 10:53:27.899386       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:53:27.900075       1 config.go:188] "Starting service config controller"
	I1025 10:53:27.900166       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 10:53:27.900214       1 config.go:97] "Starting endpoint slice config controller"
	I1025 10:53:27.900243       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 10:53:27.902774       1 config.go:315] "Starting node config controller"
	I1025 10:53:27.902841       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 10:53:28.000373       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 10:53:28.000427       1 shared_informer.go:318] Caches are synced for service config
	I1025 10:53:28.006252       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [06385d2fbad2b260e0ee5f55b8d5cba605e51721e2396456ae2851909149c4a9] <==
	I1025 10:53:26.057769       1 serving.go:348] Generated self-signed cert in-memory
	I1025 10:53:27.239614       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1025 10:53:27.239719       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:53:27.257832       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 10:53:27.258557       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 10:53:27.258602       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1025 10:53:27.270592       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1025 10:53:27.258619       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:53:27.258628       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:53:27.273444       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1025 10:53:27.271031       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 10:53:27.373518       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1025 10:53:27.378362       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 10:53:27.379508       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 25 10:53:39 old-k8s-version-031983 kubelet[776]: I1025 10:53:39.713719     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8sgz\" (UniqueName: \"kubernetes.io/projected/51ca2d1c-c8a7-4086-835f-2942a08f2e9d-kube-api-access-g8sgz\") pod \"dashboard-metrics-scraper-5f989dc9cf-wc9hh\" (UID: \"51ca2d1c-c8a7-4086-835f-2942a08f2e9d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh"
	Oct 25 10:53:39 old-k8s-version-031983 kubelet[776]: I1025 10:53:39.713789     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/51ca2d1c-c8a7-4086-835f-2942a08f2e9d-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-wc9hh\" (UID: \"51ca2d1c-c8a7-4086-835f-2942a08f2e9d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh"
	Oct 25 10:53:39 old-k8s-version-031983 kubelet[776]: I1025 10:53:39.713824     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdt22\" (UniqueName: \"kubernetes.io/projected/540dc871-78a9-4dd4-adb6-ae9d0481d23c-kube-api-access-mdt22\") pod \"kubernetes-dashboard-8694d4445c-d2zpz\" (UID: \"540dc871-78a9-4dd4-adb6-ae9d0481d23c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-d2zpz"
	Oct 25 10:53:39 old-k8s-version-031983 kubelet[776]: I1025 10:53:39.713852     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/540dc871-78a9-4dd4-adb6-ae9d0481d23c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-d2zpz\" (UID: \"540dc871-78a9-4dd4-adb6-ae9d0481d23c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-d2zpz"
	Oct 25 10:53:39 old-k8s-version-031983 kubelet[776]: W1025 10:53:39.961587     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c9e4fcd1d86890ade274f287b767861c437fe06d2f626866732741156c2dea19/crio-45d30537cf423edde419cbb40b8105e00d659c39570f87b808a0c52d3a3c7734 WatchSource:0}: Error finding container 45d30537cf423edde419cbb40b8105e00d659c39570f87b808a0c52d3a3c7734: Status 404 returned error can't find the container with id 45d30537cf423edde419cbb40b8105e00d659c39570f87b808a0c52d3a3c7734
	Oct 25 10:53:44 old-k8s-version-031983 kubelet[776]: I1025 10:53:44.221112     776 scope.go:117] "RemoveContainer" containerID="1d824a18f470bfc83287940358e699c624ab50e8bf4b25896168a1637e3913e3"
	Oct 25 10:53:45 old-k8s-version-031983 kubelet[776]: I1025 10:53:45.231548     776 scope.go:117] "RemoveContainer" containerID="1d824a18f470bfc83287940358e699c624ab50e8bf4b25896168a1637e3913e3"
	Oct 25 10:53:45 old-k8s-version-031983 kubelet[776]: I1025 10:53:45.231943     776 scope.go:117] "RemoveContainer" containerID="c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e"
	Oct 25 10:53:45 old-k8s-version-031983 kubelet[776]: E1025 10:53:45.232271     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wc9hh_kubernetes-dashboard(51ca2d1c-c8a7-4086-835f-2942a08f2e9d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh" podUID="51ca2d1c-c8a7-4086-835f-2942a08f2e9d"
	Oct 25 10:53:46 old-k8s-version-031983 kubelet[776]: I1025 10:53:46.237689     776 scope.go:117] "RemoveContainer" containerID="c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e"
	Oct 25 10:53:46 old-k8s-version-031983 kubelet[776]: E1025 10:53:46.237960     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wc9hh_kubernetes-dashboard(51ca2d1c-c8a7-4086-835f-2942a08f2e9d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh" podUID="51ca2d1c-c8a7-4086-835f-2942a08f2e9d"
	Oct 25 10:53:49 old-k8s-version-031983 kubelet[776]: I1025 10:53:49.907277     776 scope.go:117] "RemoveContainer" containerID="c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e"
	Oct 25 10:53:49 old-k8s-version-031983 kubelet[776]: E1025 10:53:49.907646     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wc9hh_kubernetes-dashboard(51ca2d1c-c8a7-4086-835f-2942a08f2e9d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh" podUID="51ca2d1c-c8a7-4086-835f-2942a08f2e9d"
	Oct 25 10:53:58 old-k8s-version-031983 kubelet[776]: I1025 10:53:58.270268     776 scope.go:117] "RemoveContainer" containerID="89d684ba83e381a7a64dfe04e56c3aeb40ac87bb22b331b2ff4e4a21ccbcf692"
	Oct 25 10:53:58 old-k8s-version-031983 kubelet[776]: I1025 10:53:58.292725     776 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-d2zpz" podStartSLOduration=10.018355918 podCreationTimestamp="2025-10-25 10:53:39 +0000 UTC" firstStartedPulling="2025-10-25 10:53:39.964974447 +0000 UTC m=+19.085006317" lastFinishedPulling="2025-10-25 10:53:49.239285107 +0000 UTC m=+28.359316985" observedRunningTime="2025-10-25 10:53:50.263746536 +0000 UTC m=+29.383778423" watchObservedRunningTime="2025-10-25 10:53:58.292666586 +0000 UTC m=+37.412698456"
	Oct 25 10:54:05 old-k8s-version-031983 kubelet[776]: I1025 10:54:05.069431     776 scope.go:117] "RemoveContainer" containerID="c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e"
	Oct 25 10:54:05 old-k8s-version-031983 kubelet[776]: I1025 10:54:05.307550     776 scope.go:117] "RemoveContainer" containerID="c799662f95ab077b9c2fb16a5af68d24c6fbc74374fbb205d03b6262a002ff9e"
	Oct 25 10:54:05 old-k8s-version-031983 kubelet[776]: I1025 10:54:05.307842     776 scope.go:117] "RemoveContainer" containerID="61207ccd40885efe969607525a97edab12360330c40d8763143e717d0f79d0a0"
	Oct 25 10:54:05 old-k8s-version-031983 kubelet[776]: E1025 10:54:05.308154     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wc9hh_kubernetes-dashboard(51ca2d1c-c8a7-4086-835f-2942a08f2e9d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh" podUID="51ca2d1c-c8a7-4086-835f-2942a08f2e9d"
	Oct 25 10:54:09 old-k8s-version-031983 kubelet[776]: I1025 10:54:09.906888     776 scope.go:117] "RemoveContainer" containerID="61207ccd40885efe969607525a97edab12360330c40d8763143e717d0f79d0a0"
	Oct 25 10:54:09 old-k8s-version-031983 kubelet[776]: E1025 10:54:09.907231     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-wc9hh_kubernetes-dashboard(51ca2d1c-c8a7-4086-835f-2942a08f2e9d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-wc9hh" podUID="51ca2d1c-c8a7-4086-835f-2942a08f2e9d"
	Oct 25 10:54:18 old-k8s-version-031983 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:54:18 old-k8s-version-031983 kubelet[776]: I1025 10:54:18.517018     776 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 25 10:54:18 old-k8s-version-031983 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:54:18 old-k8s-version-031983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [aaf729d4b726bd4ed2dcc9bceb249d54230b6890b628f49f2294833bd5b31249] <==
	2025/10/25 10:53:49 Starting overwatch
	2025/10/25 10:53:49 Using namespace: kubernetes-dashboard
	2025/10/25 10:53:49 Using in-cluster config to connect to apiserver
	2025/10/25 10:53:49 Using secret token for csrf signing
	2025/10/25 10:53:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:53:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:53:49 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 10:53:49 Generating JWE encryption key
	2025/10/25 10:53:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:53:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:53:49 Initializing JWE encryption key from synchronized object
	2025/10/25 10:53:49 Creating in-cluster Sidecar client
	2025/10/25 10:53:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:53:49 Serving insecurely on HTTP port: 9090
	2025/10/25 10:54:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1b6f7fca84250e969eb92a69cd684590b8335cd2574fefd2cc8d891ee242afd1] <==
	I1025 10:53:58.319988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:53:58.340290       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:53:58.340398       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 10:54:15.738336       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:54:15.738843       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-031983_938354f2-b9b3-4d86-bdbc-efdb23963044!
	I1025 10:54:15.741340       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bd6d084-f7ff-4686-9d53-994a26c512ba", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-031983_938354f2-b9b3-4d86-bdbc-efdb23963044 became leader
	I1025 10:54:15.839261       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-031983_938354f2-b9b3-4d86-bdbc-efdb23963044!
	
	
	==> storage-provisioner [89d684ba83e381a7a64dfe04e56c3aeb40ac87bb22b331b2ff4e4a21ccbcf692] <==
	I1025 10:53:27.790217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:53:57.794129       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-031983 -n old-k8s-version-031983
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-031983 -n old-k8s-version-031983: exit status 2 (408.915003ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-031983 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.44s)
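To replay this failure by hand, the same pause-and-status sequence the test ran can be issued directly; both commands below are taken verbatim from invocations recorded in this report (the audit table and the status check above) and assume the old-k8s-version-031983 profile still exists:

	out/minikube-linux-arm64 pause -p old-k8s-version-031983 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-031983 -n old-k8s-version-031983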

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-223394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-223394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (258.945101ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:55:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-223394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
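The MK_ADDON_ENABLE_PAUSED exit comes from minikube's pre-flight "check paused" step, which shells into the node and asks runc for its container list. A minimal manual sketch of that check, assuming the default-k8s-diff-port-223394 profile is still running (the exact invocation inside minikube may differ):

	minikube ssh -p default-k8s-diff-port-223394 -- sudo runc list -f json

Under the crio runtime, /run/runc may simply not exist yet, which is consistent with the "open /run/runc: no such file or directory" error above; listing containers through the CRI instead avoids the runc state directory:

	minikube ssh -p default-k8s-diff-port-223394 -- sudo crictl ps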
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-223394 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-223394 describe deploy/metrics-server -n kube-system: exit status 1 (82.610778ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-223394 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
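When the metrics-server deployment does get created, the image assertion can be verified directly with a standard kubectl query (a manual check, not part of the test itself):

	kubectl --context default-k8s-diff-port-223394 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

which should print fake.domain/registry.k8s.io/echoserver:1.4 once the --images/--registries overrides have been applied.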
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-223394
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-223394:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7",
	        "Created": "2025-10-25T10:54:33.801036185Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 445113,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:54:33.868540401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/hosts",
	        "LogPath": "/var/lib/docker/containers/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7-json.log",
	        "Name": "/default-k8s-diff-port-223394",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-223394:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-223394",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7",
	                "LowerDir": "/var/lib/docker/overlay2/16c3b01afa0b4ec1bbf75b73359cd04d1fc7ed7d6a6cc96f08daeb4bea593cde-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c3b01afa0b4ec1bbf75b73359cd04d1fc7ed7d6a6cc96f08daeb4bea593cde/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c3b01afa0b4ec1bbf75b73359cd04d1fc7ed7d6a6cc96f08daeb4bea593cde/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c3b01afa0b4ec1bbf75b73359cd04d1fc7ed7d6a6cc96f08daeb4bea593cde/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-223394",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-223394/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-223394",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-223394",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-223394",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0be941d4e3cb54e2bb68108060d4d67f5c3aa3c0959b6252f0e0870fd308c05e",
	            "SandboxKey": "/var/run/docker/netns/0be941d4e3cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-223394": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:42:b0:ed:e2:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8140ea88edc3e6f9170c2a8375ca78b30531642cc0a79f4070e57085e0519f4",
	                    "EndpointID": "9527f6903a68fbbb4ce8deaa7e164556582f8c1f16634278fc654c3c71e89a2f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-223394",
	                        "fdfe0713435e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
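The detail that matters in the inspect dump above is the host port bound to the API server's 8444/tcp. Docker's Go-template support can extract just that field; a small sketch using the same docker CLI invoked throughout this report:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-223394

Per the dump, this prints 33421 (bound on 127.0.0.1).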
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223394 -n default-k8s-diff-port-223394
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-223394 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-223394 logs -n 25: (1.213444353s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-759329                                                                                                                                                                                                                              │ cilium-759329                │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p force-systemd-env-623432 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-623432     │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-291330    │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │                     │
	│ start   │ -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-291330    │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ delete  │ -p kubernetes-upgrade-291330                                                                                                                                                                                                                  │ kubernetes-upgrade-291330    │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p cert-expiration-736062 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:51 UTC │
	│ delete  │ -p force-systemd-env-623432                                                                                                                                                                                                                   │ force-systemd-env-623432     │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p cert-options-771620 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-771620          │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:51 UTC │
	│ ssh     │ cert-options-771620 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-771620          │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ ssh     │ -p cert-options-771620 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-771620          │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ delete  │ -p cert-options-771620                                                                                                                                                                                                                        │ cert-options-771620          │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-031983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:52 UTC │                     │
	│ stop    │ -p old-k8s-version-031983 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-031983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:53 UTC │
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:54 UTC │
	│ image   │ old-k8s-version-031983 image list --format=json                                                                                                                                                                                               │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ pause   │ -p old-k8s-version-031983 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │                     │
	│ delete  │ -p old-k8s-version-031983                                                                                                                                                                                                                     │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ delete  │ -p old-k8s-version-031983                                                                                                                                                                                                                     │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p cert-expiration-736062 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ delete  │ -p cert-expiration-736062                                                                                                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-223394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:55:09
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:55:09.183413  448482 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:55:09.183528  448482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:55:09.183540  448482 out.go:374] Setting ErrFile to fd 2...
	I1025 10:55:09.183545  448482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:55:09.183811  448482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:55:09.184231  448482 out.go:368] Setting JSON to false
	I1025 10:55:09.185200  448482 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9461,"bootTime":1761380249,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:55:09.185277  448482 start.go:141] virtualization:  
	I1025 10:55:09.191267  448482 out.go:179] * [embed-certs-348342] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:55:09.194863  448482 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:55:09.194958  448482 notify.go:220] Checking for updates...
	I1025 10:55:09.201378  448482 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:55:09.204599  448482 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:55:09.207695  448482 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:55:09.210837  448482 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:55:09.213812  448482 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:55:09.217341  448482 config.go:182] Loaded profile config "default-k8s-diff-port-223394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:55:09.217467  448482 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:55:09.263497  448482 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:55:09.263627  448482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:55:09.320810  448482 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:55:09.311906328 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:55:09.320926  448482 docker.go:318] overlay module found
	I1025 10:55:09.324213  448482 out.go:179] * Using the docker driver based on user configuration
	I1025 10:55:09.327253  448482 start.go:305] selected driver: docker
	I1025 10:55:09.327271  448482 start.go:925] validating driver "docker" against <nil>
	I1025 10:55:09.327284  448482 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:55:09.328022  448482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:55:09.386785  448482 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:55:09.376764315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:55:09.386938  448482 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:55:09.387166  448482 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:55:09.390278  448482 out.go:179] * Using Docker driver with root privileges
	I1025 10:55:09.393252  448482 cni.go:84] Creating CNI manager for ""
	I1025 10:55:09.393328  448482 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:55:09.393343  448482 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:55:09.393426  448482 start.go:349] cluster config:
	{Name:embed-certs-348342 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:55:09.396755  448482 out.go:179] * Starting "embed-certs-348342" primary control-plane node in "embed-certs-348342" cluster
	I1025 10:55:09.399702  448482 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:55:09.402609  448482 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:55:09.405357  448482 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:55:09.405419  448482 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:55:09.405432  448482 cache.go:58] Caching tarball of preloaded images
	I1025 10:55:09.405450  448482 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:55:09.405524  448482 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:55:09.405535  448482 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:55:09.405661  448482 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/config.json ...
	I1025 10:55:09.405689  448482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/config.json: {Name:mk92b0e76533cbd7bbe4a875bc7be27823f2d7e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:55:09.425443  448482 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:55:09.425472  448482 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:55:09.425491  448482 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:55:09.425527  448482 start.go:360] acquireMachinesLock for embed-certs-348342: {Name:mk6a33c3a0d7242e8af53b027ee4f0bef4d472df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:55:09.425630  448482 start.go:364] duration metric: took 82.175µs to acquireMachinesLock for "embed-certs-348342"
	I1025 10:55:09.425660  448482 start.go:93] Provisioning new machine with config: &{Name:embed-certs-348342 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:55:09.425780  448482 start.go:125] createHost starting for "" (driver="docker")
	W1025 10:55:08.953937  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	W1025 10:55:11.453935  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	I1025 10:55:09.429293  448482 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:55:09.429518  448482 start.go:159] libmachine.API.Create for "embed-certs-348342" (driver="docker")
	I1025 10:55:09.429552  448482 client.go:168] LocalClient.Create starting
	I1025 10:55:09.429622  448482 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem
	I1025 10:55:09.429670  448482 main.go:141] libmachine: Decoding PEM data...
	I1025 10:55:09.429689  448482 main.go:141] libmachine: Parsing certificate...
	I1025 10:55:09.429747  448482 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem
	I1025 10:55:09.429768  448482 main.go:141] libmachine: Decoding PEM data...
	I1025 10:55:09.429781  448482 main.go:141] libmachine: Parsing certificate...
	I1025 10:55:09.430173  448482 cli_runner.go:164] Run: docker network inspect embed-certs-348342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:55:09.446035  448482 cli_runner.go:211] docker network inspect embed-certs-348342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:55:09.446135  448482 network_create.go:284] running [docker network inspect embed-certs-348342] to gather additional debugging logs...
	I1025 10:55:09.446157  448482 cli_runner.go:164] Run: docker network inspect embed-certs-348342
	W1025 10:55:09.463040  448482 cli_runner.go:211] docker network inspect embed-certs-348342 returned with exit code 1
	I1025 10:55:09.463084  448482 network_create.go:287] error running [docker network inspect embed-certs-348342]: docker network inspect embed-certs-348342: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-348342 not found
	I1025 10:55:09.463097  448482 network_create.go:289] output of [docker network inspect embed-certs-348342]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-348342 not found
	
	** /stderr **
	I1025 10:55:09.463210  448482 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:55:09.480376  448482 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2218a4d410c8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:a0:c3:54:c6:1f} reservation:<nil>}
	I1025 10:55:09.480901  448482 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-249eaf2d238d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:87:b9:4d:4c:0d} reservation:<nil>}
	I1025 10:55:09.481156  448482 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-210d4b236ff6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:d5:32:45:e6:85} reservation:<nil>}
	I1025 10:55:09.481669  448482 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a37280}
	I1025 10:55:09.481691  448482 network_create.go:124] attempt to create docker network embed-certs-348342 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 10:55:09.481763  448482 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348342 embed-certs-348342
	I1025 10:55:09.536869  448482 network_create.go:108] docker network embed-certs-348342 192.168.76.0/24 created
	I1025 10:55:09.536900  448482 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-348342" container
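
The three "skipping subnet ... that is taken" lines show a linear scan over candidate private /24 networks until a free one is found. A sketch assuming (inferred from the 49 → 58 → 67 → 76 progression in the log) that candidates advance in steps of 9; firstFreeSubnet is a hypothetical name, not minikube's exact algorithm:

	package main

	import "fmt"

	func firstFreeSubnet(taken map[string]bool) string {
		for third := 49; third <= 254; third += 9 { // 49, 58, 67, 76, ...
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return "" // no free candidate
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, // br-2218a4d410c8
			"192.168.58.0/24": true, // br-249eaf2d238d
			"192.168.67.0/24": true, // br-210d4b236ff6
		}
		fmt.Println("using free private subnet:", firstFreeSubnet(taken)) // 192.168.76.0/24
	}
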
	I1025 10:55:09.536977  448482 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:55:09.553616  448482 cli_runner.go:164] Run: docker volume create embed-certs-348342 --label name.minikube.sigs.k8s.io=embed-certs-348342 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:55:09.572755  448482 oci.go:103] Successfully created a docker volume embed-certs-348342
	I1025 10:55:09.572843  448482 cli_runner.go:164] Run: docker run --rm --name embed-certs-348342-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348342 --entrypoint /usr/bin/test -v embed-certs-348342:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:55:10.164846  448482 oci.go:107] Successfully prepared a docker volume embed-certs-348342
	I1025 10:55:10.164898  448482 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:55:10.164919  448482 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:55:10.164992  448482 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-348342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 10:55:13.953675  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	W1025 10:55:15.960976  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	I1025 10:55:14.584998  448482 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-348342:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.419965848s)
	I1025 10:55:14.585036  448482 kic.go:203] duration metric: took 4.420112138s to extract preloaded images to volume ...
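
The preload is unpacked by a plain docker run with a tar entrypoint against the profile's volume, and cli_runner reports the elapsed time ("Completed: ... (4.419965848s)"). A sketch of the same invocation, assuming a local docker CLI; the tarball path here is hypothetical and the image digest is elided:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			// hypothetical tarball path; the log uses the minikube cache dir
			"-v", "/tmp/preloaded-images.tar.lz4:/preloaded.tar:ro",
			"-v", "embed-certs-348342:/extractDir",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773", // digest elided
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("duration metric: took %s to extract preloaded images\n", time.Since(start))
	}
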
	W1025 10:55:14.585185  448482 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:55:14.585307  448482 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:55:14.647436  448482 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-348342 --name embed-certs-348342 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348342 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-348342 --network embed-certs-348342 --ip 192.168.76.2 --volume embed-certs-348342:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:55:14.961241  448482 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Running}}
	I1025 10:55:14.981856  448482 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:55:15.010363  448482 cli_runner.go:164] Run: docker exec embed-certs-348342 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:55:15.074302  448482 oci.go:144] the created container "embed-certs-348342" has a running status.
	I1025 10:55:15.074339  448482 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa...
	I1025 10:55:15.809276  448482 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:55:15.836636  448482 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:55:15.864601  448482 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:55:15.864625  448482 kic_runner.go:114] Args: [docker exec --privileged embed-certs-348342 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:55:15.925910  448482 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:55:15.952793  448482 machine.go:93] provisionDockerMachine start ...
	I1025 10:55:15.952885  448482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:55:15.985148  448482 main.go:141] libmachine: Using SSH client type: native
	I1025 10:55:15.985483  448482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1025 10:55:15.985493  448482 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:55:16.161855  448482 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-348342
	
	I1025 10:55:16.161941  448482 ubuntu.go:182] provisioning hostname "embed-certs-348342"
	I1025 10:55:16.162075  448482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:55:16.181864  448482 main.go:141] libmachine: Using SSH client type: native
	I1025 10:55:16.182249  448482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1025 10:55:16.182264  448482 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-348342 && echo "embed-certs-348342" | sudo tee /etc/hostname
	I1025 10:55:16.353016  448482 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-348342
	
	I1025 10:55:16.353108  448482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:55:16.374474  448482 main.go:141] libmachine: Using SSH client type: native
	I1025 10:55:16.374787  448482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1025 10:55:16.374812  448482 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-348342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-348342/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-348342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:55:16.538147  448482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
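
The SSH script above pins 127.0.1.1 to the new hostname: rewrite an existing 127.0.1.1 line, else append one. A pure-Go sketch of the same edit (the script's "already present" fast path is elided; ensureHostsEntry is a hypothetical helper):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry mirrors the sed/tee branches of the SSH script.
	func ensureHostsEntry(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + name
	}

	func main() {
		before := "127.0.0.1 localhost\n127.0.1.1 placeholder"
		fmt.Println(ensureHostsEntry(before, "embed-certs-348342"))
	}
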
	I1025 10:55:16.538177  448482 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:55:16.538197  448482 ubuntu.go:190] setting up certificates
	I1025 10:55:16.538206  448482 provision.go:84] configureAuth start
	I1025 10:55:16.538266  448482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348342
	I1025 10:55:16.554936  448482 provision.go:143] copyHostCerts
	I1025 10:55:16.555023  448482 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:55:16.555039  448482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:55:16.555118  448482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:55:16.555227  448482 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:55:16.555244  448482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:55:16.555274  448482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:55:16.555343  448482 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:55:16.555353  448482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:55:16.555379  448482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:55:16.555437  448482 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.embed-certs-348342 san=[127.0.0.1 192.168.76.2 embed-certs-348342 localhost minikube]
	I1025 10:55:17.066611  448482 provision.go:177] copyRemoteCerts
	I1025 10:55:17.066691  448482 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:55:17.066743  448482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:55:17.085436  448482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:55:17.190884  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:55:17.208603  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:55:17.225604  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:55:17.243940  448482 provision.go:87] duration metric: took 705.70967ms to configureAuth
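
copyHostCerts above follows a remove-then-copy pattern ("found ..., removing ..." then "cp: ..."). A sketch with illustrative paths; replaceFile is a hypothetical helper, not minikube's exec_runner:

	package main

	import (
		"io"
		"log"
		"os"
		"path/filepath"
	)

	func replaceFile(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil { // "found ..., removing ..."
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		base := "/home/jenkins/minikube-integration/21767-259409/.minikube"
		for _, f := range []string{"ca.pem", "cert.pem", "key.pem"} {
			if err := replaceFile(filepath.Join(base, "certs", f), filepath.Join(base, f)); err != nil {
				log.Fatal(err)
			}
		}
	}
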
	I1025 10:55:17.243967  448482 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:55:17.244165  448482 config.go:182] Loaded profile config "embed-certs-348342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:55:17.244280  448482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:55:17.264784  448482 main.go:141] libmachine: Using SSH client type: native
	I1025 10:55:17.265160  448482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1025 10:55:17.265188  448482 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:55:17.533434  448482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:55:17.533455  448482 machine.go:96] duration metric: took 1.580643465s to provisionDockerMachine
	I1025 10:55:17.533466  448482 client.go:171] duration metric: took 8.103903979s to LocalClient.Create
	I1025 10:55:17.533479  448482 start.go:167] duration metric: took 8.103962779s to libmachine.API.Create "embed-certs-348342"
	I1025 10:55:17.533487  448482 start.go:293] postStartSetup for "embed-certs-348342" (driver="docker")
	I1025 10:55:17.533501  448482 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:55:17.533561  448482 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:55:17.533611  448482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:55:17.552969  448482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:55:17.658257  448482 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:55:17.661589  448482 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:55:17.661616  448482 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:55:17.661628  448482 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:55:17.661696  448482 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:55:17.661777  448482 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:55:17.661880  448482 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:55:17.671216  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:55:17.693958  448482 start.go:296] duration metric: took 160.453124ms for postStartSetup
	I1025 10:55:17.694345  448482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348342
	I1025 10:55:17.711822  448482 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/config.json ...
	I1025 10:55:17.712105  448482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:55:17.712165  448482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:55:17.731405  448482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:55:17.838940  448482 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:55:17.844625  448482 start.go:128] duration metric: took 8.418831105s to createHost
	I1025 10:55:17.844647  448482 start.go:83] releasing machines lock for "embed-certs-348342", held for 8.419004054s
	I1025 10:55:17.844718  448482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348342
	I1025 10:55:17.862368  448482 ssh_runner.go:195] Run: cat /version.json
	I1025 10:55:17.862374  448482 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:55:17.862432  448482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:55:17.862472  448482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:55:17.883546  448482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:55:17.891547  448482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:55:18.076190  448482 ssh_runner.go:195] Run: systemctl --version
	I1025 10:55:18.082922  448482 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:55:18.121975  448482 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:55:18.126384  448482 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:55:18.126530  448482 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:55:18.161942  448482 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
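
The find/mv step above disables any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, since kindnet will be installed instead. A Go sketch of the same idea (disableBridgeCNI is a hypothetical name):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func disableBridgeCNI(dir string) error {
		for _, pat := range []string{"*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join(dir, pat))
			if err != nil {
				return err
			}
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already disabled
				}
				fmt.Printf("disabling %s\n", m)
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return err
				}
			}
		}
		return nil
	}

	func main() {
		if err := disableBridgeCNI("/etc/cni/net.d"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
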
	I1025 10:55:18.161969  448482 start.go:495] detecting cgroup driver to use...
	I1025 10:55:18.162077  448482 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:55:18.162130  448482 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:55:18.179840  448482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:55:18.193840  448482 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:55:18.193907  448482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:55:18.211987  448482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:55:18.230996  448482 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:55:18.357114  448482 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:55:18.509373  448482 docker.go:234] disabling docker service ...
	I1025 10:55:18.509485  448482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:55:18.534790  448482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:55:18.549582  448482 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:55:18.677491  448482 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:55:18.797673  448482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:55:18.811951  448482 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:55:18.828280  448482 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:55:18.828361  448482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:55:18.837830  448482 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:55:18.837905  448482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:55:18.847720  448482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:55:18.857041  448482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:55:18.866182  448482 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:55:18.874934  448482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:55:18.884243  448482 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:55:18.899321  448482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:55:18.909104  448482 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:55:18.917620  448482 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:55:18.925830  448482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:55:19.045458  448482 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:55:19.193107  448482 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:55:19.193182  448482 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:55:19.197552  448482 start.go:563] Will wait 60s for crictl version
	I1025 10:55:19.197625  448482 ssh_runner.go:195] Run: which crictl
	I1025 10:55:19.201412  448482 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:55:19.231309  448482 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
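
Both "Will wait 60s" steps above are simple polls with a deadline: first for the CRI-O socket path to appear, then for crictl version to succeed. A sketch of the polling pattern (waitForPath is a hypothetical helper):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}
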
	I1025 10:55:19.231398  448482 ssh_runner.go:195] Run: crio --version
	I1025 10:55:19.265491  448482 ssh_runner.go:195] Run: crio --version
	I1025 10:55:19.298556  448482 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:55:19.301406  448482 cli_runner.go:164] Run: docker network inspect embed-certs-348342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:55:19.320292  448482 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:55:19.324918  448482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:55:19.335818  448482 kubeadm.go:883] updating cluster {Name:embed-certs-348342 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:55:19.335934  448482 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:55:19.335992  448482 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:55:19.370135  448482 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:55:19.370165  448482 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:55:19.370228  448482 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:55:19.407534  448482 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:55:19.407562  448482 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:55:19.407570  448482 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:55:19.407657  448482 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-348342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
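
The kubelet drop-in above is generated from the cluster config (version-pinned binary path, hostname override, node IP). A sketch that renders the same unit text from a template; the struct fields are illustrative and this is not minikube's actual generator:

	package main

	import (
		"os"
		"text/template"
	)

	// unit mirrors the drop-in shown in the log.
	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		_ = t.Execute(os.Stdout, struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.34.1", "embed-certs-348342", "192.168.76.2"})
	}
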
	I1025 10:55:19.407784  448482 ssh_runner.go:195] Run: crio config
	I1025 10:55:19.479971  448482 cni.go:84] Creating CNI manager for ""
	I1025 10:55:19.480036  448482 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:55:19.480086  448482 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:55:19.480142  448482 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-348342 NodeName:embed-certs-348342 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:55:19.480319  448482 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-348342"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
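
The kubeadm config above is four YAML documents separated by "---" (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is written out below as /var/tmp/minikube/kubeadm.yaml.new (2215 bytes). A sketch of splitting such a stream into per-kind documents before handing each to its own parser (splitYAMLDocs is hypothetical):

	package main

	import (
		"fmt"
		"strings"
	)

	func splitYAMLDocs(data string) []string {
		var docs []string
		for _, d := range strings.Split(data, "\n---\n") {
			if strings.TrimSpace(d) != "" {
				docs = append(docs, d)
			}
		}
		return docs
	}

	func main() {
		sample := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration"
		for i, d := range splitYAMLDocs(sample) {
			fmt.Printf("doc %d: %s\n", i, strings.TrimSpace(d))
		}
	}
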
	
	I1025 10:55:19.480440  448482 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:55:19.489442  448482 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:55:19.489539  448482 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:55:19.497859  448482 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:55:19.512097  448482 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:55:19.526491  448482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1025 10:55:19.540638  448482 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:55:19.544387  448482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:55:19.554790  448482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:55:19.679617  448482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:55:19.697628  448482 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342 for IP: 192.168.76.2
	I1025 10:55:19.697698  448482 certs.go:195] generating shared ca certs ...
	I1025 10:55:19.697732  448482 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:55:19.697912  448482 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:55:19.698024  448482 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:55:19.698053  448482 certs.go:257] generating profile certs ...
	I1025 10:55:19.698141  448482 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/client.key
	I1025 10:55:19.698175  448482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/client.crt with IP's: []
	I1025 10:55:20.979085  448482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/client.crt ...
	I1025 10:55:20.979119  448482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/client.crt: {Name:mk3bbe54048a68009d4314dfc8fe79595d0d87d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:55:20.979319  448482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/client.key ...
	I1025 10:55:20.979334  448482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/client.key: {Name:mk801a2739985643b7ea8f63059cdf68c1910083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:55:20.979439  448482 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.key.6c3cab22
	I1025 10:55:20.979458  448482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.crt.6c3cab22 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 10:55:21.232365  448482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.crt.6c3cab22 ...
	I1025 10:55:21.232396  448482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.crt.6c3cab22: {Name:mka7e6dbf66d931363d1ce2347dcfa4ca2225f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:55:21.232586  448482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.key.6c3cab22 ...
	I1025 10:55:21.232603  448482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.key.6c3cab22: {Name:mkaa02b84c88e0b7ab345282b3fc772b7ea50a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:55:21.232701  448482 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.crt.6c3cab22 -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.crt
	I1025 10:55:21.232782  448482 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.key.6c3cab22 -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.key
	I1025 10:55:21.232845  448482 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.key
	I1025 10:55:21.232863  448482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.crt with IP's: []
	I1025 10:55:21.529953  448482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.crt ...
	I1025 10:55:21.530002  448482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.crt: {Name:mk4062f811466695448c10abc4d77bb1cc50d285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:55:21.530183  448482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.key ...
	I1025 10:55:21.530200  448482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.key: {Name:mk22013458eba56e9e7ecc4ebcd448d008af1e52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
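
The apiserver profile cert above is issued with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] (service VIP, loopback, and the node IP). A self-contained sketch with Go's crypto/x509 showing the same shape of certificate; it signs with a throwaway CA rather than the minikubeCA key on disk:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA standing in for the minikubeCA on disk.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving cert with the IP SANs from the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}
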
	I1025 10:55:21.530392  448482 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:55:21.530435  448482 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:55:21.530456  448482 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:55:21.530483  448482 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:55:21.530510  448482 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:55:21.530532  448482 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:55:21.530573  448482 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:55:21.531155  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:55:21.550451  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:55:21.570053  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:55:21.588487  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:55:21.608566  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:55:21.629814  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:55:21.648426  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:55:21.670672  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1025 10:55:21.694046  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:55:21.715113  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:55:21.740640  448482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:55:21.760969  448482 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:55:21.775528  448482 ssh_runner.go:195] Run: openssl version
	I1025 10:55:21.783251  448482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:55:21.795762  448482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:55:21.800256  448482 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:55:21.800342  448482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:55:21.842468  448482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:55:21.852131  448482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:55:21.860674  448482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:55:21.864622  448482 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:55:21.864701  448482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:55:21.906398  448482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:55:21.915224  448482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:55:21.927193  448482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:55:21.931699  448482 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:55:21.931787  448482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:55:21.977113  448482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
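
Each openssl x509 -hash call above computes the subject hash that names the /etc/ssl/certs/<hash>.0 symlink (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted CAs. A sketch, assuming an openssl binary on PATH; linkBySubjectHash is a hypothetical helper:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
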
	I1025 10:55:21.986019  448482 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:55:21.991374  448482 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:55:21.991431  448482 kubeadm.go:400] StartCluster: {Name:embed-certs-348342 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:55:21.991508  448482 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:55:21.991566  448482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:55:22.029907  448482 cri.go:89] found id: ""
	I1025 10:55:22.030016  448482 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:55:22.039231  448482 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:55:22.048805  448482 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:55:22.048882  448482 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:55:22.058303  448482 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:55:22.058378  448482 kubeadm.go:157] found existing configuration files:
	
	I1025 10:55:22.058489  448482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:55:22.067205  448482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:55:22.067290  448482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:55:22.075402  448482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:55:22.084166  448482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:55:22.084246  448482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:55:22.092397  448482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:55:22.101272  448482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:55:22.101372  448482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:55:22.109580  448482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:55:22.117828  448482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:55:22.117925  448482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:55:22.126532  448482 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
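
kubeadm init above is run from the version-pinned binaries directory, with preflight checks that are expected to fail inside a docker container explicitly ignored. A sketch invoking the binary by absolute path (the shell in the log instead prepends the directory to PATH); the ignore list here is a subset of the one above:

	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		ignored := []string{ // subset of the list in the log
			"DirAvailable--etc-kubernetes-manifests",
			"Swap", "NumCPU", "Mem", "SystemVerification",
			"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors="+strings.Join(ignored, ","))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
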
	I1025 10:55:22.172538  448482 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:55:22.172850  448482 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:55:22.202389  448482 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:55:22.202473  448482 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:55:22.202517  448482 kubeadm.go:318] OS: Linux
	I1025 10:55:22.202570  448482 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:55:22.202626  448482 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:55:22.202680  448482 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:55:22.202735  448482 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:55:22.202790  448482 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:55:22.202844  448482 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:55:22.202896  448482 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:55:22.202949  448482 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:55:22.203002  448482 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:55:22.275814  448482 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:55:22.275940  448482 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:55:22.276042  448482 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:55:22.285535  448482 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1025 10:55:18.454097  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	W1025 10:55:20.456182  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	I1025 10:55:22.292111  448482 out.go:252]   - Generating certificates and keys ...
	I1025 10:55:22.292252  448482 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:55:22.292336  448482 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:55:22.580490  448482 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:55:22.752795  448482 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:55:23.030334  448482 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:55:23.414191  448482 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	W1025 10:55:22.955210  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	W1025 10:55:25.455179  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	W1025 10:55:27.455829  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	I1025 10:55:24.494705  448482 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:55:24.495074  448482 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-348342 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:55:24.698141  448482 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:55:24.698503  448482 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-348342 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:55:25.571484  448482 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:55:25.805746  448482 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:55:26.041060  448482 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:55:26.041424  448482 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:55:26.525885  448482 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:55:26.857001  448482 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:55:27.244852  448482 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:55:27.660101  448482 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:55:28.261953  448482 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:55:28.262716  448482 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:55:28.265386  448482 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:55:28.268797  448482 out.go:252]   - Booting up control plane ...
	I1025 10:55:28.268906  448482 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:55:28.268988  448482 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:55:28.269757  448482 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:55:28.288432  448482 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:55:28.288819  448482 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:55:28.297510  448482 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:55:28.297884  448482 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:55:28.298520  448482 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:55:28.439899  448482 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:55:28.440028  448482 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1025 10:55:29.954296  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	W1025 10:55:31.954822  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	I1025 10:55:29.940374  448482 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500795609s
	I1025 10:55:29.945319  448482 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:55:29.945505  448482 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 10:55:29.945635  448482 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:55:29.945810  448482 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:55:31.475865  448482 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.52949735s
	I1025 10:55:34.616278  448482 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.671054791s
	I1025 10:55:35.446821  448482 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.501530602s
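
The [kubelet-check] and [control-plane-check] phases above are plain HTTP(S) polls against fixed health endpoints, each with a 4m0s budget: the kubelet at http://127.0.0.1:10248/healthz, the controller-manager at :10257/healthz, the scheduler at :10259/livez, and the apiserver at :8443/livez. A rough sketch of such a poll loop, assuming (as kubeadm does for local components) that the self-signed TLS endpoints are checked without certificate verification:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url every 500ms until it returns 200 OK or the
// deadline expires, mirroring kubeadm's 4m0s component checks.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// local control-plane components serve self-signed certificates
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	for _, u := range []string{
		"http://127.0.0.1:10248/healthz",  // kubelet
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	} {
		fmt.Println(u, waitHealthy(u, 4*time.Minute))
	}
}
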
	I1025 10:55:35.473904  448482 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:55:35.494573  448482 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:55:35.518993  448482 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:55:35.519512  448482 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-348342 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:55:35.555586  448482 kubeadm.go:318] [bootstrap-token] Using token: mlvxaw.3e3zu4r9bb53mbho
	I1025 10:55:35.558730  448482 out.go:252]   - Configuring RBAC rules ...
	I1025 10:55:35.558863  448482 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:55:35.570983  448482 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:55:35.580885  448482 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:55:35.587373  448482 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:55:35.592782  448482 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:55:35.598058  448482 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:55:35.854017  448482 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:55:36.324937  448482 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:55:36.853647  448482 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:55:36.854742  448482 kubeadm.go:318] 
	I1025 10:55:36.854819  448482 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:55:36.854831  448482 kubeadm.go:318] 
	I1025 10:55:36.854913  448482 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:55:36.854923  448482 kubeadm.go:318] 
	I1025 10:55:36.854964  448482 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:55:36.855033  448482 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:55:36.855094  448482 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:55:36.855103  448482 kubeadm.go:318] 
	I1025 10:55:36.855166  448482 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:55:36.855177  448482 kubeadm.go:318] 
	I1025 10:55:36.855226  448482 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:55:36.855237  448482 kubeadm.go:318] 
	I1025 10:55:36.855293  448482 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:55:36.855376  448482 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:55:36.855454  448482 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:55:36.855482  448482 kubeadm.go:318] 
	I1025 10:55:36.855575  448482 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:55:36.855658  448482 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:55:36.855666  448482 kubeadm.go:318] 
	I1025 10:55:36.855754  448482 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token mlvxaw.3e3zu4r9bb53mbho \
	I1025 10:55:36.855867  448482 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 \
	I1025 10:55:36.855891  448482 kubeadm.go:318] 	--control-plane 
	I1025 10:55:36.855900  448482 kubeadm.go:318] 
	I1025 10:55:36.855998  448482 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:55:36.856007  448482 kubeadm.go:318] 
	I1025 10:55:36.856094  448482 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token mlvxaw.3e3zu4r9bb53mbho \
	I1025 10:55:36.856205  448482 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 
	I1025 10:55:36.861292  448482 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 10:55:36.861534  448482 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 10:55:36.861648  448482 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
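
The --discovery-token-ca-cert-hash value printed in the join commands above is a SHA-256 pin over the DER-encoded SubjectPublicKeyInfo of the cluster CA, so a joining node can authenticate the control plane before trusting it. A sketch of recomputing it, assuming the certificate directory /var/lib/minikube/certs named in the [certs] phase above:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// kubeadm's discovery hash is sha256 over the DER-encoded
	// SubjectPublicKeyInfo of the cluster CA certificate.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
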
	I1025 10:55:36.861672  448482 cni.go:84] Creating CNI manager for ""
	I1025 10:55:36.861683  448482 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:55:36.864809  448482 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1025 10:55:34.454560  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	W1025 10:55:36.953600  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	I1025 10:55:36.867683  448482 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:55:36.872016  448482 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:55:36.872036  448482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:55:36.888260  448482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:55:37.205588  448482 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:55:37.205655  448482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:55:37.205778  448482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-348342 minikube.k8s.io/updated_at=2025_10_25T10_55_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=embed-certs-348342 minikube.k8s.io/primary=true
	I1025 10:55:37.349076  448482 ops.go:34] apiserver oom_adj: -16
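
The oom_adj probe at 10:55:37 (cat /proc/$(pgrep kube-apiserver)/oom_adj) verifies that the apiserver received OOM protection: -16 makes the kernel's OOM killer strongly prefer other processes. A hedged standalone version of the same check, assuming a single kube-apiserver process on the host:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep -n picks the newest matching PID, equivalent to
	// $(pgrep kube-apiserver) when only one apiserver is running.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // -16 on this node
}
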
	I1025 10:55:37.349249  448482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:55:37.849763  448482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:55:38.350121  448482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:55:38.849769  448482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:55:39.349726  448482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:55:39.850263  448482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:55:40.349397  448482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:55:40.849739  448482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:55:41.349936  448482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:55:41.441347  448482 kubeadm.go:1113] duration metric: took 4.235751749s to wait for elevateKubeSystemPrivileges
	I1025 10:55:41.441374  448482 kubeadm.go:402] duration metric: took 19.449946669s to StartCluster
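
The burst of `kubectl get sa default` calls between 10:55:37 and 10:55:41 is a readiness gate: the default ServiceAccount only exists once the controller-manager's serviceaccount controller is running, so polling for it (about 4.2s here) is a cheap proxy for the control plane accepting writes. A simplified local sketch (minikube actually invokes the versioned kubectl binary over SSH with sudo):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitDefaultSA retries `kubectl get sa default` every 500ms until the
// ServiceAccount exists or the timeout expires.
func waitDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
}

func main() {
	fmt.Println(waitDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute))
}
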
	I1025 10:55:41.441390  448482 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:55:41.441453  448482 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:55:41.442874  448482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:55:41.443089  448482 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:55:41.443243  448482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:55:41.443497  448482 config.go:182] Loaded profile config "embed-certs-348342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:55:41.443529  448482 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:55:41.443591  448482 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-348342"
	I1025 10:55:41.443605  448482 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-348342"
	I1025 10:55:41.443626  448482 host.go:66] Checking if "embed-certs-348342" exists ...
	I1025 10:55:41.444235  448482 addons.go:69] Setting default-storageclass=true in profile "embed-certs-348342"
	I1025 10:55:41.444262  448482 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-348342"
	I1025 10:55:41.444417  448482 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:55:41.444561  448482 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:55:41.448248  448482 out.go:179] * Verifying Kubernetes components...
	I1025 10:55:41.453499  448482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:55:41.489049  448482 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1025 10:55:38.954460  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	W1025 10:55:41.462390  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	I1025 10:55:41.489548  448482 addons.go:238] Setting addon default-storageclass=true in "embed-certs-348342"
	I1025 10:55:41.489588  448482 host.go:66] Checking if "embed-certs-348342" exists ...
	I1025 10:55:41.490165  448482 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:55:41.492289  448482 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:55:41.492315  448482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:55:41.492388  448482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:55:41.529370  448482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:55:41.538345  448482 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:55:41.538367  448482 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:55:41.538443  448482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:55:41.567627  448482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:55:41.827152  448482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:55:41.871245  448482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:55:41.871452  448482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:55:41.892715  448482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:55:42.695642  448482 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1025 10:55:42.699580  448482 node_ready.go:35] waiting up to 6m0s for node "embed-certs-348342" to be "Ready" ...
	I1025 10:55:42.753596  448482 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:55:42.756512  448482 addons.go:514] duration metric: took 1.312956206s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:55:43.200098  448482 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-348342" context rescaled to 1 replicas
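
The sed pipeline at 10:55:41 rewrites the coredns ConfigMap in place: it splices a hosts{} stanza mapping host.minikube.internal to the host gateway (192.168.76.1) in front of the `forward . /etc/resolv.conf` plugin, which is what the "host record injected into CoreDNS's ConfigMap" line confirms. A minimal sketch of the same string surgery, assuming the Corefile text is already in hand (the sample Corefile below is abbreviated):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord adds a hosts{} block before the forward plugin so
// CoreDNS resolves host.minikube.internal to the host gateway.
func injectHostRecord(corefile, hostIP string) string {
	stanza := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(stanza)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
}
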
	W1025 10:55:43.954151  444716 node_ready.go:57] node "default-k8s-diff-port-223394" has "Ready":"False" status (will retry)
	I1025 10:55:44.454222  444716 node_ready.go:49] node "default-k8s-diff-port-223394" is "Ready"
	I1025 10:55:44.454257  444716 node_ready.go:38] duration metric: took 39.503391168s for node "default-k8s-diff-port-223394" to be "Ready" ...
	I1025 10:55:44.454271  444716 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:55:44.454331  444716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:55:44.471527  444716 api_server.go:72] duration metric: took 41.865079299s to wait for apiserver process to appear ...
	I1025 10:55:44.471553  444716 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:55:44.471574  444716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1025 10:55:44.492120  444716 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1025 10:55:44.493374  444716 api_server.go:141] control plane version: v1.34.1
	I1025 10:55:44.493396  444716 api_server.go:131] duration metric: took 21.836465ms to wait for apiserver health ...
	I1025 10:55:44.493405  444716 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:55:44.503401  444716 system_pods.go:59] 8 kube-system pods found
	I1025 10:55:44.503429  444716 system_pods.go:61] "coredns-66bc5c9577-w9r8g" [83c72429-725c-4c35-bb11-105ba8c376f7] Pending
	I1025 10:55:44.503436  444716 system_pods.go:61] "etcd-default-k8s-diff-port-223394" [ab9ee066-36cf-49d8-8df1-322149f30734] Running
	I1025 10:55:44.503440  444716 system_pods.go:61] "kindnet-tclvn" [d1cfc858-b152-45a9-be90-74dff3a44e56] Running
	I1025 10:55:44.503445  444716 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-223394" [d04401de-dd89-47b6-9578-8ebdda939aa5] Running
	I1025 10:55:44.503450  444716 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-223394" [7be42c7d-33c1-4768-b784-bb82322b2638] Running
	I1025 10:55:44.503454  444716 system_pods.go:61] "kube-proxy-zpq57" [ec42d09b-7c2e-41f3-a944-c0551a0a9c52] Running
	I1025 10:55:44.503458  444716 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-223394" [0e489dec-def1-4bb4-b635-18f453c541d0] Running
	I1025 10:55:44.503464  444716 system_pods.go:61] "storage-provisioner" [8eadd837-3c03-4cd3-97cb-5f7664d9620a] Pending
	I1025 10:55:44.503471  444716 system_pods.go:74] duration metric: took 10.058373ms to wait for pod list to return data ...
	I1025 10:55:44.503478  444716 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:55:44.509748  444716 default_sa.go:45] found service account: "default"
	I1025 10:55:44.509778  444716 default_sa.go:55] duration metric: took 6.293941ms for default service account to be created ...
	I1025 10:55:44.509789  444716 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:55:44.514943  444716 system_pods.go:86] 8 kube-system pods found
	I1025 10:55:44.514988  444716 system_pods.go:89] "coredns-66bc5c9577-w9r8g" [83c72429-725c-4c35-bb11-105ba8c376f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:55:44.514997  444716 system_pods.go:89] "etcd-default-k8s-diff-port-223394" [ab9ee066-36cf-49d8-8df1-322149f30734] Running
	I1025 10:55:44.515004  444716 system_pods.go:89] "kindnet-tclvn" [d1cfc858-b152-45a9-be90-74dff3a44e56] Running
	I1025 10:55:44.515009  444716 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-223394" [d04401de-dd89-47b6-9578-8ebdda939aa5] Running
	I1025 10:55:44.515014  444716 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-223394" [7be42c7d-33c1-4768-b784-bb82322b2638] Running
	I1025 10:55:44.515019  444716 system_pods.go:89] "kube-proxy-zpq57" [ec42d09b-7c2e-41f3-a944-c0551a0a9c52] Running
	I1025 10:55:44.515025  444716 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-223394" [0e489dec-def1-4bb4-b635-18f453c541d0] Running
	I1025 10:55:44.515029  444716 system_pods.go:89] "storage-provisioner" [8eadd837-3c03-4cd3-97cb-5f7664d9620a] Pending
	I1025 10:55:44.515049  444716 retry.go:31] will retry after 286.223229ms: missing components: kube-dns
	I1025 10:55:44.805331  444716 system_pods.go:86] 8 kube-system pods found
	I1025 10:55:44.805366  444716 system_pods.go:89] "coredns-66bc5c9577-w9r8g" [83c72429-725c-4c35-bb11-105ba8c376f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:55:44.805374  444716 system_pods.go:89] "etcd-default-k8s-diff-port-223394" [ab9ee066-36cf-49d8-8df1-322149f30734] Running
	I1025 10:55:44.805380  444716 system_pods.go:89] "kindnet-tclvn" [d1cfc858-b152-45a9-be90-74dff3a44e56] Running
	I1025 10:55:44.805385  444716 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-223394" [d04401de-dd89-47b6-9578-8ebdda939aa5] Running
	I1025 10:55:44.805389  444716 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-223394" [7be42c7d-33c1-4768-b784-bb82322b2638] Running
	I1025 10:55:44.805395  444716 system_pods.go:89] "kube-proxy-zpq57" [ec42d09b-7c2e-41f3-a944-c0551a0a9c52] Running
	I1025 10:55:44.805399  444716 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-223394" [0e489dec-def1-4bb4-b635-18f453c541d0] Running
	I1025 10:55:44.805406  444716 system_pods.go:89] "storage-provisioner" [8eadd837-3c03-4cd3-97cb-5f7664d9620a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:55:44.805430  444716 retry.go:31] will retry after 378.171465ms: missing components: kube-dns
	I1025 10:55:45.197613  444716 system_pods.go:86] 8 kube-system pods found
	I1025 10:55:45.197653  444716 system_pods.go:89] "coredns-66bc5c9577-w9r8g" [83c72429-725c-4c35-bb11-105ba8c376f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:55:45.197661  444716 system_pods.go:89] "etcd-default-k8s-diff-port-223394" [ab9ee066-36cf-49d8-8df1-322149f30734] Running
	I1025 10:55:45.197669  444716 system_pods.go:89] "kindnet-tclvn" [d1cfc858-b152-45a9-be90-74dff3a44e56] Running
	I1025 10:55:45.197674  444716 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-223394" [d04401de-dd89-47b6-9578-8ebdda939aa5] Running
	I1025 10:55:45.197680  444716 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-223394" [7be42c7d-33c1-4768-b784-bb82322b2638] Running
	I1025 10:55:45.197685  444716 system_pods.go:89] "kube-proxy-zpq57" [ec42d09b-7c2e-41f3-a944-c0551a0a9c52] Running
	I1025 10:55:45.197690  444716 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-223394" [0e489dec-def1-4bb4-b635-18f453c541d0] Running
	I1025 10:55:45.197697  444716 system_pods.go:89] "storage-provisioner" [8eadd837-3c03-4cd3-97cb-5f7664d9620a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:55:45.197720  444716 retry.go:31] will retry after 297.89823ms: missing components: kube-dns
	I1025 10:55:45.500395  444716 system_pods.go:86] 8 kube-system pods found
	I1025 10:55:45.500476  444716 system_pods.go:89] "coredns-66bc5c9577-w9r8g" [83c72429-725c-4c35-bb11-105ba8c376f7] Running
	I1025 10:55:45.500491  444716 system_pods.go:89] "etcd-default-k8s-diff-port-223394" [ab9ee066-36cf-49d8-8df1-322149f30734] Running
	I1025 10:55:45.500499  444716 system_pods.go:89] "kindnet-tclvn" [d1cfc858-b152-45a9-be90-74dff3a44e56] Running
	I1025 10:55:45.500506  444716 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-223394" [d04401de-dd89-47b6-9578-8ebdda939aa5] Running
	I1025 10:55:45.500510  444716 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-223394" [7be42c7d-33c1-4768-b784-bb82322b2638] Running
	I1025 10:55:45.500514  444716 system_pods.go:89] "kube-proxy-zpq57" [ec42d09b-7c2e-41f3-a944-c0551a0a9c52] Running
	I1025 10:55:45.500519  444716 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-223394" [0e489dec-def1-4bb4-b635-18f453c541d0] Running
	I1025 10:55:45.500523  444716 system_pods.go:89] "storage-provisioner" [8eadd837-3c03-4cd3-97cb-5f7664d9620a] Running
	I1025 10:55:45.500532  444716 system_pods.go:126] duration metric: took 990.736366ms to wait for k8s-apps to be running ...
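
The retry.go lines above show the k8s-apps check re-running with short jittered delays (286ms, 378ms, 297ms) until coredns and storage-provisioner leave Pending; the whole wait took just under a second. A simplified sketch of such a poll with client-go, assuming a kubeconfig at the path minikube writes (/var/lib/minikube/kubeconfig) and checking only pod phase rather than minikube's full per-container readiness:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitKubeSystemRunning polls until every kube-system pod is Running.
func waitKubeSystemRunning(cs *kubernetes.Clientset, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			pending := 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					pending++
				}
			}
			if pending == 0 {
				return nil
			}
		}
		time.Sleep(300 * time.Millisecond) // minikube adds jitter here
	}
	return fmt.Errorf("kube-system pods not all Running after %s", timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitKubeSystemRunning(cs, 6*time.Minute))
}
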
	I1025 10:55:45.500542  444716 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:55:45.500610  444716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:55:45.518311  444716 system_svc.go:56] duration metric: took 17.757454ms WaitForService to wait for kubelet
	I1025 10:55:45.518357  444716 kubeadm.go:586] duration metric: took 42.911913883s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:55:45.518383  444716 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:55:45.524256  444716 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:55:45.524302  444716 node_conditions.go:123] node cpu capacity is 2
	I1025 10:55:45.524318  444716 node_conditions.go:105] duration metric: took 5.928243ms to run NodePressure ...
	I1025 10:55:45.524330  444716 start.go:241] waiting for startup goroutines ...
	I1025 10:55:45.524338  444716 start.go:246] waiting for cluster config update ...
	I1025 10:55:45.524350  444716 start.go:255] writing updated cluster config ...
	I1025 10:55:45.524716  444716 ssh_runner.go:195] Run: rm -f paused
	I1025 10:55:45.528735  444716 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:55:45.534774  444716 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w9r8g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:55:45.540614  444716 pod_ready.go:94] pod "coredns-66bc5c9577-w9r8g" is "Ready"
	I1025 10:55:45.540647  444716 pod_ready.go:86] duration metric: took 5.838125ms for pod "coredns-66bc5c9577-w9r8g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:55:45.545258  444716 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:55:45.551098  444716 pod_ready.go:94] pod "etcd-default-k8s-diff-port-223394" is "Ready"
	I1025 10:55:45.551131  444716 pod_ready.go:86] duration metric: took 5.792291ms for pod "etcd-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:55:45.553861  444716 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:55:45.559177  444716 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-223394" is "Ready"
	I1025 10:55:45.559209  444716 pod_ready.go:86] duration metric: took 5.316717ms for pod "kube-apiserver-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:55:45.561778  444716 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:55:45.933478  444716 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-223394" is "Ready"
	I1025 10:55:45.933508  444716 pod_ready.go:86] duration metric: took 371.702873ms for pod "kube-controller-manager-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:55:46.135055  444716 pod_ready.go:83] waiting for pod "kube-proxy-zpq57" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:55:46.533139  444716 pod_ready.go:94] pod "kube-proxy-zpq57" is "Ready"
	I1025 10:55:46.533168  444716 pod_ready.go:86] duration metric: took 398.034459ms for pod "kube-proxy-zpq57" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:55:46.734143  444716 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:55:47.135683  444716 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-223394" is "Ready"
	I1025 10:55:47.135710  444716 pod_ready.go:86] duration metric: took 401.495717ms for pod "kube-scheduler-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:55:47.135723  444716 pod_ready.go:40] duration metric: took 1.606951502s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
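
The final "extra waiting" phase differs from the phase check above: pod_ready.go inspects each labelled kube-system pod's Ready condition, which only turns True once every container passes its readiness probe. A small sketch of that predicate (the helper name podIsReady is illustrative, not minikube's):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podIsReady reports whether the pod carries condition Ready=True, the
// same test applied to each labelled kube-system pod in the log above.
func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(podIsReady(p)) // true
}
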
	I1025 10:55:47.194564  444716 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:55:47.198118  444716 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-223394" cluster and "default" namespace by default
	W1025 10:55:44.702503  448482 node_ready.go:57] node "embed-certs-348342" has "Ready":"False" status (will retry)
	W1025 10:55:47.206406  448482 node_ready.go:57] node "embed-certs-348342" has "Ready":"False" status (will retry)
	W1025 10:55:49.702550  448482 node_ready.go:57] node "embed-certs-348342" has "Ready":"False" status (will retry)
	W1025 10:55:51.703401  448482 node_ready.go:57] node "embed-certs-348342" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 25 10:55:44 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:44.938794542Z" level=info msg="Created container 15a138bca22d784e0423c73e2b06be8877805333bb99db9dc7f3360e82a2b66f: kube-system/coredns-66bc5c9577-w9r8g/coredns" id=72cc52f3-958d-4792-8b09-79e606779ce1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:55:44 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:44.940799754Z" level=info msg="Starting container: 15a138bca22d784e0423c73e2b06be8877805333bb99db9dc7f3360e82a2b66f" id=13f8432a-623f-4cd8-b62d-d475fa64d734 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:55:44 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:44.943145171Z" level=info msg="Started container" PID=1707 containerID=15a138bca22d784e0423c73e2b06be8877805333bb99db9dc7f3360e82a2b66f description=kube-system/coredns-66bc5c9577-w9r8g/coredns id=13f8432a-623f-4cd8-b62d-d475fa64d734 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee0fc55f062301eec2b750d1b8d7be048daa3e6fd07283ec3f85eaa651d6ad7d
	Oct 25 10:55:47 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:47.734607512Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4ea1dde6-b26c-4abd-87a7-d42b3522e0af name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:55:47 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:47.734675755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:55:47 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:47.748284494Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:503fd82b41e0d1178642e454d5980f8ba4485d00eb62d8a1ec909b0cebe3bb4a UID:9fe5f74f-d071-4f2d-8540-22336c347abd NetNS:/var/run/netns/db98bcf1-da74-49d9-83e0-ef212298c61f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079528}] Aliases:map[]}"
	Oct 25 10:55:47 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:47.748474354Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 10:55:47 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:47.757967643Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:503fd82b41e0d1178642e454d5980f8ba4485d00eb62d8a1ec909b0cebe3bb4a UID:9fe5f74f-d071-4f2d-8540-22336c347abd NetNS:/var/run/netns/db98bcf1-da74-49d9-83e0-ef212298c61f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079528}] Aliases:map[]}"
	Oct 25 10:55:47 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:47.759058803Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 10:55:47 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:47.762614413Z" level=info msg="Ran pod sandbox 503fd82b41e0d1178642e454d5980f8ba4485d00eb62d8a1ec909b0cebe3bb4a with infra container: default/busybox/POD" id=4ea1dde6-b26c-4abd-87a7-d42b3522e0af name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:55:47 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:47.764724537Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2556cb5f-0352-4cf5-bc20-bc8d5d7e766f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:55:47 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:47.764889109Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2556cb5f-0352-4cf5-bc20-bc8d5d7e766f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:55:47 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:47.764948867Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2556cb5f-0352-4cf5-bc20-bc8d5d7e766f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:55:47 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:47.770460187Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=38f50056-04a6-4610-a633-1a08fd113dbd name=/runtime.v1.ImageService/PullImage
	Oct 25 10:55:47 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:47.774668652Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 10:55:49 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:49.831316496Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=38f50056-04a6-4610-a633-1a08fd113dbd name=/runtime.v1.ImageService/PullImage
	Oct 25 10:55:49 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:49.832469245Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7ff4a5f5-6c8d-4e14-9ebc-e99219985d79 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:55:49 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:49.835615489Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=43f678e1-c151-4fcd-8656-ade2e6944e16 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:55:49 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:49.841962016Z" level=info msg="Creating container: default/busybox/busybox" id=3e1171b4-016c-4ebe-a84b-84893912e886 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:55:49 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:49.842139413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:55:49 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:49.848449714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:55:49 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:49.849017597Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:55:49 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:49.866931827Z" level=info msg="Created container f1d8b6089facd57f07a5d4ffc2ab6cd737ca499955a2c93549a0268e2b23b55d: default/busybox/busybox" id=3e1171b4-016c-4ebe-a84b-84893912e886 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:55:49 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:49.867685057Z" level=info msg="Starting container: f1d8b6089facd57f07a5d4ffc2ab6cd737ca499955a2c93549a0268e2b23b55d" id=c1cec69a-9704-4215-9e9f-82fc8c3aa70b name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:55:49 default-k8s-diff-port-223394 crio[837]: time="2025-10-25T10:55:49.872169331Z" level=info msg="Started container" PID=1764 containerID=f1d8b6089facd57f07a5d4ffc2ab6cd737ca499955a2c93549a0268e2b23b55d description=default/busybox/busybox id=c1cec69a-9704-4215-9e9f-82fc8c3aa70b name=/runtime.v1.RuntimeService/StartContainer sandboxID=503fd82b41e0d1178642e454d5980f8ba4485d00eb62d8a1ec909b0cebe3bb4a
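
The CRI-O excerpt above traces one pod through the CRI gRPC surface: RunPodSandbox, ImageStatus (a miss), PullImage, CreateContainer, StartContainer. The same RuntimeService API can be queried directly; a sketch assuming the default CRI-O socket at /var/run/crio/crio.sock, listing containers much like the "container status" table below:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O listens on a unix socket; gRPC's "unix://" scheme dials it.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same data the "container status" table is rendered from.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State) // IDs are 64 hex chars
	}
}
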
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	f1d8b6089facd       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   503fd82b41e0d       busybox                                                default
	15a138bca22d7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   ee0fc55f06230       coredns-66bc5c9577-w9r8g                               kube-system
	8ed23fa29a7be       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   90b14301aca7e       storage-provisioner                                    kube-system
	dd822dcd22495       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   2e4c017737db8       kube-proxy-zpq57                                       kube-system
	1b83fed847fa0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   90f358a7de1a6       kindnet-tclvn                                          kube-system
	9a0732387aea1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   731d06624c4a4       etcd-default-k8s-diff-port-223394                      kube-system
	a50621761c608       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   c369b1ffe8ed0       kube-controller-manager-default-k8s-diff-port-223394   kube-system
	3a2ad2bbbd42f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   bcae3a3563c2f       kube-apiserver-default-k8s-diff-port-223394            kube-system
	7ab9a0ded4cd1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   5518c2d9e6ad9       kube-scheduler-default-k8s-diff-port-223394            kube-system
	
	
	==> coredns [15a138bca22d784e0423c73e2b06be8877805333bb99db9dc7f3360e82a2b66f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47475 - 49697 "HINFO IN 6440797528183983184.3307214152103563439. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.048406152s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-223394
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-223394
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=default-k8s-diff-port-223394
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_54_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:54:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-223394
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:55:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:55:44 +0000   Sat, 25 Oct 2025 10:54:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:55:44 +0000   Sat, 25 Oct 2025 10:54:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:55:44 +0000   Sat, 25 Oct 2025 10:54:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:55:44 +0000   Sat, 25 Oct 2025 10:55:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-223394
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                de5cd403-6ec9-42cd-9429-85d79e1a8304
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-w9r8g                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-default-k8s-diff-port-223394                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-tclvn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-223394             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-223394    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-zpq57                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-223394             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 52s                kube-proxy       
	  Normal   Starting                 67s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x9 over 67s)  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x7 over 67s)  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-223394 event: Registered Node default-k8s-diff-port-223394 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-223394 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct25 10:31] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[ +23.146409] overlayfs: idmapped layers are currently not supported
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
	[Oct25 10:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:55] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9a0732387aea17ef29f12075e0e29f0e14aa70d0c6f52acd9601c80316e6924c] <==
	{"level":"warn","ts":"2025-10-25T10:54:52.295996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.340221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.344240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.359773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.378111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.395188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.421196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.431068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.450856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.473270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.498378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.507592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.566414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.578258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.595343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.612066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.639076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.659082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.677713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.694900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.730554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.756308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.779602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:54:52.864496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47528","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T10:55:03.321672Z","caller":"traceutil/trace.go:172","msg":"trace[612464477] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"117.256005ms","start":"2025-10-25T10:55:03.204345Z","end":"2025-10-25T10:55:03.321601Z","steps":["trace[612464477] 'process raft request'  (duration: 117.041143ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:55:57 up  2:38,  0 user,  load average: 3.23, 3.40, 2.87
	Linux default-k8s-diff-port-223394 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b83fed847fa010f068f7e1068e65c410e54f2d1929f5855e8fe13a4ace8c929] <==
	I1025 10:55:03.742547       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:55:03.742832       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:55:03.742984       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:55:03.743001       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:55:03.743016       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:55:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:55:04.030678       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:55:04.030698       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:55:04.030707       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:55:04.031008       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:55:34.031137       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:55:34.031236       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:55:34.031435       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:55:34.031569       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1025 10:55:35.531454       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:55:35.531494       1 metrics.go:72] Registering metrics
	I1025 10:55:35.531577       1 controller.go:711] "Syncing nftables rules"
	I1025 10:55:44.033682       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:55:44.033722       1 main.go:301] handling current node
	I1025 10:55:54.029750       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:55:54.029811       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3a2ad2bbbd42f7f325192b1dd95d91b5a4027622c446a4f25dd964f6fe83f35c] <==
	I1025 10:54:54.367139       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:54:54.367898       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:54:54.426829       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:54:54.438357       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:54:54.508773       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:54:54.551318       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:54:54.701891       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:54:54.780846       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:54:54.838657       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:54:54.838683       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:54:56.279467       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:54:56.331046       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:54:56.474106       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:54:56.484913       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 10:54:56.486338       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:54:56.495275       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:54:56.709515       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:54:57.512180       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:54:57.549367       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:54:57.565138       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:55:02.151434       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:55:02.357577       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:55:02.365328       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:55:02.749306       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1025 10:55:55.569089       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:53714: use of closed network connection
	
	
	==> kube-controller-manager [a50621761c60836d7a049ba6ea7905e65629ea19a90da1e668affbe23d1bb51e] <==
	I1025 10:55:01.797192       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:55:01.797262       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:55:01.797413       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:55:01.798279       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-223394" podCIDRs=["10.244.0.0/24"]
	I1025 10:55:01.801394       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:55:01.801468       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:55:01.801487       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:55:01.801652       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:55:01.801708       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-223394"
	I1025 10:55:01.801764       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 10:55:01.801794       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:55:01.806921       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:55:01.810074       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:55:01.819397       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:55:01.837655       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:55:01.845113       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:55:01.845277       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:55:01.845307       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:55:01.845489       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:55:01.845542       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:55:01.845686       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:55:01.858136       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:55:01.858165       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:55:01.858176       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:55:46.814018       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [dd822dcd224955b75138d59e0f222e53fd74af28de7a2616220e90e9f408e60f] <==
	I1025 10:55:03.969437       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:55:04.159646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:55:04.259868       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:55:04.263293       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:55:04.263423       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:55:04.407474       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:55:04.407531       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:55:04.419059       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:55:04.419359       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:55:04.419375       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:55:04.430871       1 config.go:200] "Starting service config controller"
	I1025 10:55:04.430892       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:55:04.430909       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:55:04.430913       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:55:04.430921       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:55:04.430925       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:55:04.431352       1 config.go:309] "Starting node config controller"
	I1025 10:55:04.431360       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:55:04.531018       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:55:04.531059       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:55:04.531072       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:55:04.531732       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [7ab9a0ded4cd1cf957d8fbf55b7d4749c31e0dd68d31086baa51022c3b206087] <==
	I1025 10:54:55.873523       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:54:55.881278       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:54:55.882144       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:54:55.882220       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:54:55.882273       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 10:54:55.896359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:54:55.900983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:54:55.901126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:54:55.903121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:54:55.910836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:54:55.925512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:54:55.925573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:54:55.925621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:54:55.925669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:54:55.925720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:54:55.925767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:54:55.925813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:54:55.925862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:54:55.925929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:54:55.925965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:54:55.926889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:54:55.926935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:54:55.926989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:54:55.929796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1025 10:54:57.082653       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:54:59 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:54:59.144046    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-223394" podStartSLOduration=1.144027329 podStartE2EDuration="1.144027329s" podCreationTimestamp="2025-10-25 10:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:54:59.143767897 +0000 UTC m=+1.711784036" watchObservedRunningTime="2025-10-25 10:54:59.144027329 +0000 UTC m=+1.712043460"
	Oct 25 10:54:59 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:54:59.144229    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-223394" podStartSLOduration=1.144222129 podStartE2EDuration="1.144222129s" podCreationTimestamp="2025-10-25 10:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:54:59.090612572 +0000 UTC m=+1.658628728" watchObservedRunningTime="2025-10-25 10:54:59.144222129 +0000 UTC m=+1.712238252"
	Oct 25 10:54:59 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:54:59.170062    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-223394" podStartSLOduration=1.170042402 podStartE2EDuration="1.170042402s" podCreationTimestamp="2025-10-25 10:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:54:59.169670862 +0000 UTC m=+1.737686993" watchObservedRunningTime="2025-10-25 10:54:59.170042402 +0000 UTC m=+1.738058549"
	Oct 25 10:55:01 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:01.788879    1296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 10:55:01 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:01.793687    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 10:55:03 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:03.030890    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1cfc858-b152-45a9-be90-74dff3a44e56-xtables-lock\") pod \"kindnet-tclvn\" (UID: \"d1cfc858-b152-45a9-be90-74dff3a44e56\") " pod="kube-system/kindnet-tclvn"
	Oct 25 10:55:03 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:03.031373    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d1cfc858-b152-45a9-be90-74dff3a44e56-cni-cfg\") pod \"kindnet-tclvn\" (UID: \"d1cfc858-b152-45a9-be90-74dff3a44e56\") " pod="kube-system/kindnet-tclvn"
	Oct 25 10:55:03 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:03.031464    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1cfc858-b152-45a9-be90-74dff3a44e56-lib-modules\") pod \"kindnet-tclvn\" (UID: \"d1cfc858-b152-45a9-be90-74dff3a44e56\") " pod="kube-system/kindnet-tclvn"
	Oct 25 10:55:03 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:03.031567    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2rbc\" (UniqueName: \"kubernetes.io/projected/d1cfc858-b152-45a9-be90-74dff3a44e56-kube-api-access-v2rbc\") pod \"kindnet-tclvn\" (UID: \"d1cfc858-b152-45a9-be90-74dff3a44e56\") " pod="kube-system/kindnet-tclvn"
	Oct 25 10:55:03 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:03.133082    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec42d09b-7c2e-41f3-a944-c0551a0a9c52-xtables-lock\") pod \"kube-proxy-zpq57\" (UID: \"ec42d09b-7c2e-41f3-a944-c0551a0a9c52\") " pod="kube-system/kube-proxy-zpq57"
	Oct 25 10:55:03 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:03.133132    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ec42d09b-7c2e-41f3-a944-c0551a0a9c52-kube-proxy\") pod \"kube-proxy-zpq57\" (UID: \"ec42d09b-7c2e-41f3-a944-c0551a0a9c52\") " pod="kube-system/kube-proxy-zpq57"
	Oct 25 10:55:03 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:03.133155    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec42d09b-7c2e-41f3-a944-c0551a0a9c52-lib-modules\") pod \"kube-proxy-zpq57\" (UID: \"ec42d09b-7c2e-41f3-a944-c0551a0a9c52\") " pod="kube-system/kube-proxy-zpq57"
	Oct 25 10:55:03 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:03.133174    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2q9v\" (UniqueName: \"kubernetes.io/projected/ec42d09b-7c2e-41f3-a944-c0551a0a9c52-kube-api-access-v2q9v\") pod \"kube-proxy-zpq57\" (UID: \"ec42d09b-7c2e-41f3-a944-c0551a0a9c52\") " pod="kube-system/kube-proxy-zpq57"
	Oct 25 10:55:03 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:03.366808    1296 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:55:03 default-k8s-diff-port-223394 kubelet[1296]: W1025 10:55:03.648136    1296 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/crio-2e4c017737db8feecf1bf8242ba4e8758818b140c7866714725f85c4f43e1987 WatchSource:0}: Error finding container 2e4c017737db8feecf1bf8242ba4e8758818b140c7866714725f85c4f43e1987: Status 404 returned error can't find the container with id 2e4c017737db8feecf1bf8242ba4e8758818b140c7866714725f85c4f43e1987
	Oct 25 10:55:04 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:04.096675    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zpq57" podStartSLOduration=2.096653983 podStartE2EDuration="2.096653983s" podCreationTimestamp="2025-10-25 10:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:55:04.064367953 +0000 UTC m=+6.632384084" watchObservedRunningTime="2025-10-25 10:55:04.096653983 +0000 UTC m=+6.664670114"
	Oct 25 10:55:06 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:06.708046    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tclvn" podStartSLOduration=4.708026798 podStartE2EDuration="4.708026798s" podCreationTimestamp="2025-10-25 10:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:55:04.100844922 +0000 UTC m=+6.668861061" watchObservedRunningTime="2025-10-25 10:55:06.708026798 +0000 UTC m=+9.276042921"
	Oct 25 10:55:44 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:44.431145    1296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:55:44 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:44.521908    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9psb5\" (UniqueName: \"kubernetes.io/projected/83c72429-725c-4c35-bb11-105ba8c376f7-kube-api-access-9psb5\") pod \"coredns-66bc5c9577-w9r8g\" (UID: \"83c72429-725c-4c35-bb11-105ba8c376f7\") " pod="kube-system/coredns-66bc5c9577-w9r8g"
	Oct 25 10:55:44 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:44.522195    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83c72429-725c-4c35-bb11-105ba8c376f7-config-volume\") pod \"coredns-66bc5c9577-w9r8g\" (UID: \"83c72429-725c-4c35-bb11-105ba8c376f7\") " pod="kube-system/coredns-66bc5c9577-w9r8g"
	Oct 25 10:55:44 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:44.522232    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8eadd837-3c03-4cd3-97cb-5f7664d9620a-tmp\") pod \"storage-provisioner\" (UID: \"8eadd837-3c03-4cd3-97cb-5f7664d9620a\") " pod="kube-system/storage-provisioner"
	Oct 25 10:55:44 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:44.522254    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcxft\" (UniqueName: \"kubernetes.io/projected/8eadd837-3c03-4cd3-97cb-5f7664d9620a-kube-api-access-kcxft\") pod \"storage-provisioner\" (UID: \"8eadd837-3c03-4cd3-97cb-5f7664d9620a\") " pod="kube-system/storage-provisioner"
	Oct 25 10:55:45 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:45.255076    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.255032729 podStartE2EDuration="41.255032729s" podCreationTimestamp="2025-10-25 10:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:55:45.233298574 +0000 UTC m=+47.801314697" watchObservedRunningTime="2025-10-25 10:55:45.255032729 +0000 UTC m=+47.823048851"
	Oct 25 10:55:47 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:47.421092    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w9r8g" podStartSLOduration=44.42107212 podStartE2EDuration="44.42107212s" podCreationTimestamp="2025-10-25 10:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:55:45.257696269 +0000 UTC m=+47.825712408" watchObservedRunningTime="2025-10-25 10:55:47.42107212 +0000 UTC m=+49.989088251"
	Oct 25 10:55:47 default-k8s-diff-port-223394 kubelet[1296]: I1025 10:55:47.444632    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-568wh\" (UniqueName: \"kubernetes.io/projected/9fe5f74f-d071-4f2d-8540-22336c347abd-kube-api-access-568wh\") pod \"busybox\" (UID: \"9fe5f74f-d071-4f2d-8540-22336c347abd\") " pod="default/busybox"
	
	
	==> storage-provisioner [8ed23fa29a7be64327b0faa50e3515a901bfddbcedb04209236438abeeacb1e1] <==
	I1025 10:55:44.951151       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:55:44.976165       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:55:44.977094       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:55:44.980365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:55:45.017694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:55:45.018156       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:55:45.020993       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"349ccd11-8226-4feb-9ee3-b35b622cb7d9", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-223394_d3eb15b0-d757-41b7-b486-087318767e13 became leader
	I1025 10:55:45.022170       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-223394_d3eb15b0-d757-41b7-b486-087318767e13!
	W1025 10:55:45.066863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:55:45.080968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:55:45.129661       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-223394_d3eb15b0-d757-41b7-b486-087318767e13!
	W1025 10:55:47.083970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:55:47.091434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:55:49.095035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:55:49.101048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:55:51.104632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:55:51.111887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:55:53.115090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:55:53.119549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:55:55.124211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:55:55.134424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:55:57.139186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:55:57.157676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-223394 -n default-k8s-diff-port-223394
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-223394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-348342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-348342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (378.670569ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:56:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-348342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-348342 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-348342 describe deploy/metrics-server -n kube-system: exit status 1 (127.681618ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-348342 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
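
Note on the failure mode: the exit status 11 / MK_ADDON_ENABLE_PAUSED above arises because the addon-enable path first checks for paused containers by shelling out to `sudo runc list -f json`, and on this crio node that command dies with "open /run/runc: no such file or directory" (see the stderr block earlier in this test). Below is a minimal sketch of such a paused-state check; the struct fields and control flow are assumptions for illustration only, not minikube's actual implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer models the subset of `runc list -f json` output this
// sketch uses; the field names are assumptions, not verified schema.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused shells out to runc and returns the IDs of paused containers.
// runc reads container state from its root directory (/run/runc by
// default); when that directory is missing, the command exits non-zero
// with "open /run/runc: no such file or directory" -- the exact error
// captured above -- and the whole check fails before any parsing happens.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused:", err)
		return
	}
	fmt.Println("paused:", ids)
}

On a healthy node this prints the paused container IDs; here the exec itself fails, which is what surfaces as the "check paused: list paused" chain in the error message.
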
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-348342
helpers_test.go:243: (dbg) docker inspect embed-certs-348342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4",
	        "Created": "2025-10-25T10:55:14.663333918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 448876,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:55:14.730089309Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/hosts",
	        "LogPath": "/var/lib/docker/containers/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4-json.log",
	        "Name": "/embed-certs-348342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-348342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-348342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4",
	                "LowerDir": "/var/lib/docker/overlay2/7af8a0a0e4548ff306a21c56011c7ef1e62940e78c923925b89499c6f933074a-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7af8a0a0e4548ff306a21c56011c7ef1e62940e78c923925b89499c6f933074a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7af8a0a0e4548ff306a21c56011c7ef1e62940e78c923925b89499c6f933074a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7af8a0a0e4548ff306a21c56011c7ef1e62940e78c923925b89499c6f933074a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-348342",
	                "Source": "/var/lib/docker/volumes/embed-certs-348342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-348342",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-348342",
	                "name.minikube.sigs.k8s.io": "embed-certs-348342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "24d7f0d488744173e78777a48d8fe43f9bd2e753648df7dc739c7c85488c6a92",
	            "SandboxKey": "/var/run/docker/netns/24d7f0d48874",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-348342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:7a:a8:01:ef:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9165ff42962d9a3f99eefc8873610a74534a4c5300b06a1e9249fa26eacccff4",
	                    "EndpointID": "a34606e13a01f38f4478bc99cd3300db31f3e4953ddaea4253b35dce71d9126c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-348342",
	                        "f2631e70db67"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
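
Aside: the host-side port mappings recorded in the inspect dump above (e.g. 8443/tcp published on 127.0.0.1:33426, which is how the harness reaches the API server) can be extracted programmatically. A small sketch that decodes `docker inspect` output from stdin, mirroring only the JSON fields visible in the dump:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// portBinding and inspectEntry mirror the subset of `docker inspect`
// JSON shown above (Name and NetworkSettings.Ports).
type portBinding struct {
	HostIp   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

type inspectEntry struct {
	Name            string `json:"Name"`
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// docker inspect prints a JSON array with one entry per container.
	var entries []inspectEntry
	if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		for port, bindings := range e.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s %s -> %s:%s\n", e.Name, port, b.HostIp, b.HostPort)
			}
		}
	}
}

Usage, assuming the sketch is saved as inspect_ports.go (name arbitrary): docker inspect embed-certs-348342 | go run inspect_ports.go
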
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-348342 -n embed-certs-348342
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-348342 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-348342 logs -n 25: (1.826469524s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-291330                                                                                                                                                                                                                  │ kubernetes-upgrade-291330    │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p cert-expiration-736062 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:51 UTC │
	│ delete  │ -p force-systemd-env-623432                                                                                                                                                                                                                   │ force-systemd-env-623432     │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:50 UTC │
	│ start   │ -p cert-options-771620 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-771620          │ jenkins │ v1.37.0 │ 25 Oct 25 10:50 UTC │ 25 Oct 25 10:51 UTC │
	│ ssh     │ cert-options-771620 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-771620          │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ ssh     │ -p cert-options-771620 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-771620          │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ delete  │ -p cert-options-771620                                                                                                                                                                                                                        │ cert-options-771620          │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-031983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:52 UTC │                     │
	│ stop    │ -p old-k8s-version-031983 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-031983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:53 UTC │
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:54 UTC │
	│ image   │ old-k8s-version-031983 image list --format=json                                                                                                                                                                                               │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ pause   │ -p old-k8s-version-031983 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │                     │
	│ delete  │ -p old-k8s-version-031983                                                                                                                                                                                                                     │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ delete  │ -p old-k8s-version-031983                                                                                                                                                                                                                     │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p cert-expiration-736062 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ delete  │ -p cert-expiration-736062                                                                                                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-223394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-223394 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-223394 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-348342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
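
The table above is minikube's audit trail of the commands recently run on this host. A minimal sketch for pulling the same entries straight from disk, assuming the default audit location under MINIKUBE_HOME (the logs/audit.json path and its exact per-line schema can vary by minikube version):

	# Pretty-print the most recent audit entries (one JSON object per line); requires jq.
	tail -n 25 "${MINIKUBE_HOME:-$HOME/.minikube}/logs/audit.json" | jq .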
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:56:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:56:10.372397  451806 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:56:10.372613  451806 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:56:10.372641  451806 out.go:374] Setting ErrFile to fd 2...
	I1025 10:56:10.372659  451806 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:56:10.372934  451806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:56:10.373331  451806 out.go:368] Setting JSON to false
	I1025 10:56:10.374349  451806 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9522,"bootTime":1761380249,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:56:10.374460  451806 start.go:141] virtualization:  
	I1025 10:56:10.378006  451806 out.go:179] * [default-k8s-diff-port-223394] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:56:10.381951  451806 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:56:10.382115  451806 notify.go:220] Checking for updates...
	I1025 10:56:10.388161  451806 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:56:10.391183  451806 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:56:10.394108  451806 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:56:10.397056  451806 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:56:10.399979  451806 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:56:10.403398  451806 config.go:182] Loaded profile config "default-k8s-diff-port-223394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:56:10.404040  451806 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:56:10.442022  451806 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:56:10.442178  451806 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:56:10.503559  451806 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:56:10.494283082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:56:10.503668  451806 docker.go:318] overlay module found
	I1025 10:56:10.508694  451806 out.go:179] * Using the docker driver based on existing profile
	I1025 10:56:10.511626  451806 start.go:305] selected driver: docker
	I1025 10:56:10.511651  451806 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-223394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-223394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:56:10.511789  451806 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:56:10.512534  451806 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:56:10.571047  451806 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:56:10.561786013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:56:10.571442  451806 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:56:10.571463  451806 cni.go:84] Creating CNI manager for ""
	I1025 10:56:10.571517  451806 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:56:10.571550  451806 start.go:349] cluster config:
	{Name:default-k8s-diff-port-223394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-223394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:56:10.576516  451806 out.go:179] * Starting "default-k8s-diff-port-223394" primary control-plane node in "default-k8s-diff-port-223394" cluster
	I1025 10:56:10.579307  451806 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:56:10.582248  451806 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:56:10.585086  451806 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:56:10.585141  451806 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:56:10.585155  451806 cache.go:58] Caching tarball of preloaded images
	I1025 10:56:10.585189  451806 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:56:10.585253  451806 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:56:10.585264  451806 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:56:10.585383  451806 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/config.json ...
	I1025 10:56:10.604388  451806 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:56:10.604411  451806 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
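
The image.go:100 check is nothing more than a lookup in the local Docker daemon for the digest-pinned kicbase tag, which is why no pull happens. A quick way to confirm the same cache hit by hand, assuming the host's default Docker context:

	# Show the cached kicbase image together with its digest.
	docker images --digests gcr.io/k8s-minikube/kicbase-builds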
	I1025 10:56:10.604425  451806 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:56:10.604456  451806 start.go:360] acquireMachinesLock for default-k8s-diff-port-223394: {Name:mkab4dadc9d50dccb5803fca940387ed75e72301 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:56:10.604516  451806 start.go:364] duration metric: took 37.752µs to acquireMachinesLock for "default-k8s-diff-port-223394"
	I1025 10:56:10.604539  451806 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:56:10.604544  451806 fix.go:54] fixHost starting: 
	I1025 10:56:10.604799  451806 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-223394 --format={{.State.Status}}
	I1025 10:56:10.622936  451806 fix.go:112] recreateIfNeeded on default-k8s-diff-port-223394: state=Stopped err=<nil>
	W1025 10:56:10.622969  451806 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 10:56:09.702316  448482 node_ready.go:57] node "embed-certs-348342" has "Ready":"False" status (will retry)
	W1025 10:56:12.202632  448482 node_ready.go:57] node "embed-certs-348342" has "Ready":"False" status (will retry)
	I1025 10:56:10.626267  451806 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-223394" ...
	I1025 10:56:10.626356  451806 cli_runner.go:164] Run: docker start default-k8s-diff-port-223394
	I1025 10:56:10.889614  451806 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-223394 --format={{.State.Status}}
	I1025 10:56:10.912894  451806 kic.go:430] container "default-k8s-diff-port-223394" state is running.
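
The restart path is an ordinary `docker start` followed by a state poll, exactly as the two cli_runner lines above show. A minimal manual equivalent, assuming the profile container still exists:

	docker start default-k8s-diff-port-223394
	docker container inspect -f '{{.State.Status}}' default-k8s-diff-port-223394   # expect: running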
	I1025 10:56:10.913826  451806 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-223394
	I1025 10:56:10.940796  451806 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/config.json ...
	I1025 10:56:10.941212  451806 machine.go:93] provisionDockerMachine start ...
	I1025 10:56:10.941289  451806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-223394
	I1025 10:56:10.963933  451806 main.go:141] libmachine: Using SSH client type: native
	I1025 10:56:10.964253  451806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1025 10:56:10.964263  451806 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:56:10.965146  451806 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 10:56:14.122007  451806 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-223394
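
The native SSH client above dials 127.0.0.1:33428, the ephemeral host port Docker published for the container's port 22; the first handshake fails with EOF because sshd inside the node is still starting, and the dial is retried until it succeeds. A sketch of reproducing the connection by hand, assuming the default MINIKUBE_HOME layout and the `docker` login user shown at sshutil.go:53 further down:

	PROFILE=default-k8s-diff-port-223394
	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$PROFILE")
	ssh -o StrictHostKeyChecking=no -i "$HOME/.minikube/machines/$PROFILE/id_rsa" -p "$PORT" docker@127.0.0.1 hostname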
	
	I1025 10:56:14.122033  451806 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-223394"
	I1025 10:56:14.122097  451806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-223394
	I1025 10:56:14.148398  451806 main.go:141] libmachine: Using SSH client type: native
	I1025 10:56:14.148714  451806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1025 10:56:14.148725  451806 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-223394 && echo "default-k8s-diff-port-223394" | sudo tee /etc/hostname
	I1025 10:56:14.312377  451806 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-223394
	
	I1025 10:56:14.312473  451806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-223394
	I1025 10:56:14.331925  451806 main.go:141] libmachine: Using SSH client type: native
	I1025 10:56:14.332238  451806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1025 10:56:14.332264  451806 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-223394' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-223394/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-223394' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:56:14.483090  451806 main.go:141] libmachine: SSH cmd err, output: <nil>: 
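
The guarded script above is idempotent: the outer `grep -xq` skips the edit when some line already maps the hostname, and otherwise either rewrites the existing 127.0.1.1 entry in place or appends a new one. A quick check that exactly one mapping landed, assuming `minikube ssh` against this profile:

	minikube -p default-k8s-diff-port-223394 ssh -- grep -c default-k8s-diff-port-223394 /etc/hosts   # expect: 1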
	I1025 10:56:14.483161  451806 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:56:14.483207  451806 ubuntu.go:190] setting up certificates
	I1025 10:56:14.483218  451806 provision.go:84] configureAuth start
	I1025 10:56:14.483288  451806 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-223394
	I1025 10:56:14.504366  451806 provision.go:143] copyHostCerts
	I1025 10:56:14.504443  451806 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:56:14.504466  451806 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:56:14.504545  451806 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:56:14.504662  451806 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:56:14.504677  451806 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:56:14.504706  451806 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:56:14.504776  451806 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:56:14.504785  451806 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:56:14.504812  451806 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:56:14.504872  451806 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-223394 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-223394 localhost minikube]
	I1025 10:56:14.821181  451806 provision.go:177] copyRemoteCerts
	I1025 10:56:14.821247  451806 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:56:14.821294  451806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-223394
	I1025 10:56:14.839613  451806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/default-k8s-diff-port-223394/id_rsa Username:docker}
	I1025 10:56:14.945919  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:56:14.964675  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 10:56:14.983564  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:56:15.019755  451806 provision.go:87] duration metric: took 536.497027ms to configureAuth
	I1025 10:56:15.019822  451806 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:56:15.020053  451806 config.go:182] Loaded profile config "default-k8s-diff-port-223394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:56:15.020183  451806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-223394
	I1025 10:56:15.049423  451806 main.go:141] libmachine: Using SSH client type: native
	I1025 10:56:15.049782  451806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1025 10:56:15.049808  451806 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:56:15.394847  451806 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:56:15.394890  451806 machine.go:96] duration metric: took 4.453664159s to provisionDockerMachine
	I1025 10:56:15.394901  451806 start.go:293] postStartSetup for "default-k8s-diff-port-223394" (driver="docker")
	I1025 10:56:15.394912  451806 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:56:15.394978  451806 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:56:15.395039  451806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-223394
	I1025 10:56:15.418458  451806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/default-k8s-diff-port-223394/id_rsa Username:docker}
	I1025 10:56:15.526248  451806 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:56:15.529826  451806 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:56:15.529857  451806 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:56:15.529868  451806 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:56:15.529922  451806 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:56:15.530039  451806 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:56:15.530151  451806 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:56:15.538096  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:56:15.557295  451806 start.go:296] duration metric: took 162.377746ms for postStartSetup
	I1025 10:56:15.557407  451806 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:56:15.557457  451806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-223394
	I1025 10:56:15.575265  451806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/default-k8s-diff-port-223394/id_rsa Username:docker}
	I1025 10:56:15.679347  451806 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:56:15.684245  451806 fix.go:56] duration metric: took 5.079691526s for fixHost
	I1025 10:56:15.684283  451806 start.go:83] releasing machines lock for "default-k8s-diff-port-223394", held for 5.079754583s
	I1025 10:56:15.684378  451806 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-223394
	I1025 10:56:15.707080  451806 ssh_runner.go:195] Run: cat /version.json
	I1025 10:56:15.707095  451806 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:56:15.707144  451806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-223394
	I1025 10:56:15.707168  451806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-223394
	I1025 10:56:15.730240  451806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/default-k8s-diff-port-223394/id_rsa Username:docker}
	I1025 10:56:15.733030  451806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/default-k8s-diff-port-223394/id_rsa Username:docker}
	I1025 10:56:15.925324  451806 ssh_runner.go:195] Run: systemctl --version
	I1025 10:56:15.932013  451806 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:56:15.972893  451806 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:56:15.977762  451806 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:56:15.977843  451806 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:56:15.986065  451806 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:56:15.986093  451806 start.go:495] detecting cgroup driver to use...
	I1025 10:56:15.986157  451806 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:56:15.986222  451806 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:56:16.007903  451806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:56:16.022929  451806 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:56:16.023048  451806 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:56:16.040615  451806 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:56:16.055321  451806 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:56:16.189834  451806 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:56:16.315036  451806 docker.go:234] disabling docker service ...
	I1025 10:56:16.315111  451806 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:56:16.332127  451806 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:56:16.346693  451806 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:56:16.478511  451806 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:56:16.616128  451806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:56:16.629130  451806 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:56:16.644550  451806 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:56:16.644703  451806 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:16.654719  451806 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:56:16.654855  451806 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:16.664501  451806 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:16.673593  451806 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:16.683078  451806 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:56:16.692126  451806 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:16.703252  451806 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:16.712635  451806 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:16.722664  451806 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:56:16.730929  451806 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:56:16.738676  451806 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:56:16.864859  451806 ssh_runner.go:195] Run: sudo systemctl restart crio
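
The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, scopes conmon to the pod cgroup, and injects a default_sysctls block that opens unprivileged ports from 0. A sketch for confirming the rewrite survived the restart, assuming the same drop-in path:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf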
	I1025 10:56:17.013231  451806 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:56:17.013319  451806 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:56:17.017712  451806 start.go:563] Will wait 60s for crictl version
	I1025 10:56:17.017776  451806 ssh_runner.go:195] Run: which crictl
	I1025 10:56:17.021651  451806 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:56:17.049107  451806 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:56:17.049211  451806 ssh_runner.go:195] Run: crio --version
	I1025 10:56:17.080398  451806 ssh_runner.go:195] Run: crio --version
	I1025 10:56:17.115188  451806 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:56:17.118173  451806 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-223394 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:56:17.142436  451806 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:56:17.146676  451806 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:56:17.157563  451806 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-223394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-223394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:56:17.157724  451806 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:56:17.157800  451806 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:56:17.192314  451806 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:56:17.192391  451806 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:56:17.192486  451806 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:56:17.223522  451806 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:56:17.223545  451806 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:56:17.223552  451806 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1025 10:56:17.223646  451806 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-223394 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-223394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
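
The empty ExecStart= line in the generated drop-in above is deliberate systemd syntax: it clears the ExecStart inherited from the base kubelet.service so the following line replaces the command instead of appending a second one. The merged result can be inspected on the node with:

	systemctl cat kubelet   # drop-ins print after the base unit; the last ExecStart wins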
	I1025 10:56:17.223730  451806 ssh_runner.go:195] Run: crio config
	I1025 10:56:17.305063  451806 cni.go:84] Creating CNI manager for ""
	I1025 10:56:17.305089  451806 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:56:17.305136  451806 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:56:17.305169  451806 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-223394 NodeName:default-k8s-diff-port-223394 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:56:17.305350  451806 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-223394"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
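
The config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) before kubeadm consumes it. A hedged way to sanity-check such a file against the pinned binary, assuming `kubeadm config validate` is available in this release line (it is in recent kubeadm versions):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml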
	
	I1025 10:56:17.305465  451806 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:56:17.313493  451806 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:56:17.313577  451806 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:56:17.321314  451806 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 10:56:17.334572  451806 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:56:17.347692  451806 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1025 10:56:17.361738  451806 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:56:17.365415  451806 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:56:17.375129  451806 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:56:17.521750  451806 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:56:17.540138  451806 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394 for IP: 192.168.85.2
	I1025 10:56:17.540158  451806 certs.go:195] generating shared ca certs ...
	I1025 10:56:17.540174  451806 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:56:17.540312  451806 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:56:17.540359  451806 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:56:17.540370  451806 certs.go:257] generating profile certs ...
	I1025 10:56:17.540451  451806 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.key
	I1025 10:56:17.540519  451806 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/apiserver.key.f2319017
	I1025 10:56:17.540573  451806 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/proxy-client.key
	I1025 10:56:17.540684  451806 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:56:17.540724  451806 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:56:17.540738  451806 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:56:17.540762  451806 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:56:17.540788  451806 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:56:17.540812  451806 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:56:17.540861  451806 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:56:17.541448  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:56:17.561962  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:56:17.583044  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:56:17.602296  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:56:17.621614  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:56:17.648725  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:56:17.671232  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:56:17.693437  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:56:17.727898  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:56:17.751695  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:56:17.776307  451806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:56:17.798313  451806 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:56:17.812968  451806 ssh_runner.go:195] Run: openssl version
	I1025 10:56:17.820077  451806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:56:17.829433  451806 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:56:17.833226  451806 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:56:17.833299  451806 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:56:17.876434  451806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:56:17.885186  451806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:56:17.894482  451806 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:56:17.898386  451806 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:56:17.898458  451806 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:56:17.942577  451806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:56:17.951686  451806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:56:17.960769  451806 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:56:17.964995  451806 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:56:17.965103  451806 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:56:18.007716  451806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
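
The b5213941.0-style names are OpenSSL subject hashes: consumers of /etc/ssl/certs look CAs up by the value of `openssl x509 -hash`, so each trusted certificate gets a <hash>.0 symlink. Reproducing the hash for the minikube CA:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem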
	I1025 10:56:18.018839  451806 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:56:18.023379  451806 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:56:18.066426  451806 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:56:18.108338  451806 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:56:18.154786  451806 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:56:18.198322  451806 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:56:18.246755  451806 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
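
`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24 hours); that is how minikube decides the existing control-plane certs are still usable before a restart. The same check done natively with crypto/x509, as a sketch; the path is one of the certs checked above:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the cert's NotAfter falls inside now+d,
    // matching the semantics of `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
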
	I1025 10:56:18.305002  451806 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-223394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-223394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:56:18.305088  451806 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:56:18.305161  451806 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:56:18.370309  451806 cri.go:89] found id: "9779636f70f0c278cba11f390d72d18ecf6492c685c187c54ca454f436e08653"
	I1025 10:56:18.370371  451806 cri.go:89] found id: ""
	I1025 10:56:18.370476  451806 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:56:18.387942  451806 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:56:18Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:56:18.388079  451806 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:56:18.400018  451806 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:56:18.400082  451806 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:56:18.400167  451806 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:56:18.425506  451806 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:56:18.426530  451806 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-223394" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:56:18.427133  451806 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-259409/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-223394" cluster setting kubeconfig missing "default-k8s-diff-port-223394" context setting]
	I1025 10:56:18.427934  451806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
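
The "needs updating (will repair)" path rewrites the kubeconfig under the file lock acquired above, inserting the missing cluster and context entries. With client-go's clientcmd the repair looks roughly like the following sketch; repairKubeconfig and its arguments are illustrative, not minikube's exact code:

    package kubecfg

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig adds the missing cluster/context entries and writes
    // the file back; minikube additionally serializes this behind a lock.
    func repairKubeconfig(path, name, server, caPath string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caPath}
    	cfg.AuthInfos[name] = &api.AuthInfo{} // client credentials elided here
    	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
    	cfg.CurrentContext = name
    	return clientcmd.WriteToFile(*cfg, path)
    }
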
	I1025 10:56:18.429694  451806 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:56:18.448711  451806 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:56:18.448786  451806 kubeadm.go:601] duration metric: took 48.681853ms to restartPrimaryControlPlane
	I1025 10:56:18.448810  451806 kubeadm.go:402] duration metric: took 143.816305ms to StartCluster
	I1025 10:56:18.448854  451806 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:56:18.448934  451806 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:56:18.450541  451806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:56:18.450837  451806 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:56:18.451152  451806 config.go:182] Loaded profile config "default-k8s-diff-port-223394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:56:18.451220  451806 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:56:18.451351  451806 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-223394"
	I1025 10:56:18.451393  451806 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-223394"
	W1025 10:56:18.451420  451806 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:56:18.451464  451806 host.go:66] Checking if "default-k8s-diff-port-223394" exists ...
	I1025 10:56:18.451991  451806 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-223394 --format={{.State.Status}}
	I1025 10:56:18.452192  451806 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-223394"
	I1025 10:56:18.452231  451806 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-223394"
	W1025 10:56:18.452252  451806 addons.go:247] addon dashboard should already be in state true
	I1025 10:56:18.452307  451806 host.go:66] Checking if "default-k8s-diff-port-223394" exists ...
	I1025 10:56:18.452529  451806 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-223394"
	I1025 10:56:18.452558  451806 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-223394"
	I1025 10:56:18.452792  451806 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-223394 --format={{.State.Status}}
	I1025 10:56:18.452867  451806 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-223394 --format={{.State.Status}}
	I1025 10:56:18.455054  451806 out.go:179] * Verifying Kubernetes components...
	I1025 10:56:18.458737  451806 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:56:18.506518  451806 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:56:18.509695  451806 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:56:18.509740  451806 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:56:18.509755  451806 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:56:18.509826  451806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-223394
	I1025 10:56:18.522151  451806 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1025 10:56:14.203585  448482 node_ready.go:57] node "embed-certs-348342" has "Ready":"False" status (will retry)
	W1025 10:56:16.702600  448482 node_ready.go:57] node "embed-certs-348342" has "Ready":"False" status (will retry)
	W1025 10:56:18.703226  448482 node_ready.go:57] node "embed-certs-348342" has "Ready":"False" status (will retry)
	I1025 10:56:18.525582  451806 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-223394"
	W1025 10:56:18.525610  451806 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:56:18.525636  451806 host.go:66] Checking if "default-k8s-diff-port-223394" exists ...
	I1025 10:56:18.526150  451806 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:56:18.526168  451806 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:56:18.526225  451806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-223394
	I1025 10:56:18.526473  451806 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-223394 --format={{.State.Status}}
	I1025 10:56:18.547054  451806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/default-k8s-diff-port-223394/id_rsa Username:docker}
	I1025 10:56:18.580610  451806 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:56:18.580632  451806 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:56:18.580713  451806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-223394
	I1025 10:56:18.588670  451806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/default-k8s-diff-port-223394/id_rsa Username:docker}
	I1025 10:56:18.615571  451806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/default-k8s-diff-port-223394/id_rsa Username:docker}
	I1025 10:56:18.821496  451806 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:56:18.821518  451806 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:56:18.823614  451806 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:56:18.900597  451806 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:56:18.900674  451806 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:56:18.903278  451806 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:56:18.912965  451806 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:56:18.976685  451806 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:56:18.976730  451806 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:56:19.083433  451806 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:56:19.083503  451806 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:56:19.123586  451806 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:56:19.123662  451806 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:56:19.148165  451806 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:56:19.148244  451806 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:56:19.171099  451806 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:56:19.171176  451806 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:56:19.195692  451806 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:56:19.195769  451806 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:56:19.223665  451806 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:56:19.223741  451806 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:56:19.246453  451806 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
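
The addon installer stages each manifest under /etc/kubernetes/addons and then issues a single `kubectl apply` with one `-f` per file, which is the long Run line above. Assembled in Go as a sketch (applyManifests is our name; the sudo/KUBECONFIG shape mirrors the log, relying on sudo treating leading VAR=value arguments as environment settings):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyManifests builds the one-shot `kubectl apply -f a.yaml -f b.yaml ...`
    // invocation seen in the Run line above.
    func applyManifests(kubectl, kubeconfig string, files []string) ([]byte, error) {
    	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	return exec.Command("sudo", args...).CombinedOutput()
    }

    func main() {
    	out, err := applyManifests("/var/lib/minikube/binaries/v1.34.1/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		[]string{"/etc/kubernetes/addons/dashboard-ns.yaml"})
    	fmt.Println(string(out), err)
    }
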
	W1025 10:56:21.202749  448482 node_ready.go:57] node "embed-certs-348342" has "Ready":"False" status (will retry)
	I1025 10:56:23.204134  448482 node_ready.go:49] node "embed-certs-348342" is "Ready"
	I1025 10:56:23.204162  448482 node_ready.go:38] duration metric: took 40.504548115s for node "embed-certs-348342" to be "Ready" ...
	I1025 10:56:23.204178  448482 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:56:23.204233  448482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:56:23.236101  448482 api_server.go:72] duration metric: took 41.792983408s to wait for apiserver process to appear ...
	I1025 10:56:23.236125  448482 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:56:23.236145  448482 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:56:23.247835  448482 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:56:23.248962  448482 api_server.go:141] control plane version: v1.34.1
	I1025 10:56:23.248983  448482 api_server.go:131] duration metric: took 12.851177ms to wait for apiserver health ...
	I1025 10:56:23.248993  448482 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:56:23.258683  448482 system_pods.go:59] 8 kube-system pods found
	I1025 10:56:23.258717  448482 system_pods.go:61] "coredns-66bc5c9577-sqrrf" [15846173-f49c-4d50-af52-3b1b371fde43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:56:23.258724  448482 system_pods.go:61] "etcd-embed-certs-348342" [65a59ffa-4cba-4290-8c46-07e62bcf564b] Running
	I1025 10:56:23.258731  448482 system_pods.go:61] "kindnet-q5mzm" [4caa08ee-f6f3-442c-ad08-2be933f2869f] Running
	I1025 10:56:23.258735  448482 system_pods.go:61] "kube-apiserver-embed-certs-348342" [b67dbed8-5ebd-4a9f-804c-ed82033d0e19] Running
	I1025 10:56:23.258741  448482 system_pods.go:61] "kube-controller-manager-embed-certs-348342" [9ce2257d-b332-492a-8553-f7736a99b5db] Running
	I1025 10:56:23.258745  448482 system_pods.go:61] "kube-proxy-j9ngr" [946e15f1-043f-4f6e-a995-79bb16033e3d] Running
	I1025 10:56:23.258750  448482 system_pods.go:61] "kube-scheduler-embed-certs-348342" [4e9a5441-6278-4eca-82d2-606bda24b02d] Running
	I1025 10:56:23.258756  448482 system_pods.go:61] "storage-provisioner" [9a91278c-945c-48bc-be8b-39e026d485b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:56:23.258762  448482 system_pods.go:74] duration metric: took 9.763757ms to wait for pod list to return data ...
	I1025 10:56:23.258769  448482 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:56:23.271988  448482 default_sa.go:45] found service account: "default"
	I1025 10:56:23.272062  448482 default_sa.go:55] duration metric: took 13.285544ms for default service account to be created ...
	I1025 10:56:23.272087  448482 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:56:23.353978  448482 system_pods.go:86] 8 kube-system pods found
	I1025 10:56:23.354112  448482 system_pods.go:89] "coredns-66bc5c9577-sqrrf" [15846173-f49c-4d50-af52-3b1b371fde43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:56:23.354135  448482 system_pods.go:89] "etcd-embed-certs-348342" [65a59ffa-4cba-4290-8c46-07e62bcf564b] Running
	I1025 10:56:23.354162  448482 system_pods.go:89] "kindnet-q5mzm" [4caa08ee-f6f3-442c-ad08-2be933f2869f] Running
	I1025 10:56:23.354187  448482 system_pods.go:89] "kube-apiserver-embed-certs-348342" [b67dbed8-5ebd-4a9f-804c-ed82033d0e19] Running
	I1025 10:56:23.354215  448482 system_pods.go:89] "kube-controller-manager-embed-certs-348342" [9ce2257d-b332-492a-8553-f7736a99b5db] Running
	I1025 10:56:23.354246  448482 system_pods.go:89] "kube-proxy-j9ngr" [946e15f1-043f-4f6e-a995-79bb16033e3d] Running
	I1025 10:56:23.354271  448482 system_pods.go:89] "kube-scheduler-embed-certs-348342" [4e9a5441-6278-4eca-82d2-606bda24b02d] Running
	I1025 10:56:23.354297  448482 system_pods.go:89] "storage-provisioner" [9a91278c-945c-48bc-be8b-39e026d485b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:56:23.354354  448482 retry.go:31] will retry after 252.849155ms: missing components: kube-dns
	I1025 10:56:23.610911  448482 system_pods.go:86] 8 kube-system pods found
	I1025 10:56:23.611004  448482 system_pods.go:89] "coredns-66bc5c9577-sqrrf" [15846173-f49c-4d50-af52-3b1b371fde43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:56:23.611027  448482 system_pods.go:89] "etcd-embed-certs-348342" [65a59ffa-4cba-4290-8c46-07e62bcf564b] Running
	I1025 10:56:23.611056  448482 system_pods.go:89] "kindnet-q5mzm" [4caa08ee-f6f3-442c-ad08-2be933f2869f] Running
	I1025 10:56:23.611081  448482 system_pods.go:89] "kube-apiserver-embed-certs-348342" [b67dbed8-5ebd-4a9f-804c-ed82033d0e19] Running
	I1025 10:56:23.611107  448482 system_pods.go:89] "kube-controller-manager-embed-certs-348342" [9ce2257d-b332-492a-8553-f7736a99b5db] Running
	I1025 10:56:23.611138  448482 system_pods.go:89] "kube-proxy-j9ngr" [946e15f1-043f-4f6e-a995-79bb16033e3d] Running
	I1025 10:56:23.611158  448482 system_pods.go:89] "kube-scheduler-embed-certs-348342" [4e9a5441-6278-4eca-82d2-606bda24b02d] Running
	I1025 10:56:23.611178  448482 system_pods.go:89] "storage-provisioner" [9a91278c-945c-48bc-be8b-39e026d485b4] Running
	I1025 10:56:23.611204  448482 system_pods.go:126] duration metric: took 339.096707ms to wait for k8s-apps to be running ...
	I1025 10:56:23.611234  448482 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:56:23.611309  448482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:56:23.633066  448482 system_svc.go:56] duration metric: took 21.822368ms WaitForService to wait for kubelet
	I1025 10:56:23.633137  448482 kubeadm.go:586] duration metric: took 42.190022779s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:56:23.633171  448482 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:56:23.638385  448482 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:56:23.638486  448482 node_conditions.go:123] node cpu capacity is 2
	I1025 10:56:23.638516  448482 node_conditions.go:105] duration metric: took 5.323134ms to run NodePressure ...
	I1025 10:56:23.638544  448482 start.go:241] waiting for startup goroutines ...
	I1025 10:56:23.638573  448482 start.go:246] waiting for cluster config update ...
	I1025 10:56:23.638601  448482 start.go:255] writing updated cluster config ...
	I1025 10:56:23.638919  448482 ssh_runner.go:195] Run: rm -f paused
	I1025 10:56:23.646446  448482 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:56:23.654431  448482 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sqrrf" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:56:25.195617  451806 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.37196893s)
	I1025 10:56:25.195665  451806 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.292325867s)
	I1025 10:56:25.196010  451806 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.283023325s)
	I1025 10:56:25.196033  451806 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-223394" to be "Ready" ...
	I1025 10:56:25.196285  451806 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.949733455s)
	I1025 10:56:25.210879  451806 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-223394 addons enable metrics-server
	
	I1025 10:56:25.216975  451806 node_ready.go:49] node "default-k8s-diff-port-223394" is "Ready"
	I1025 10:56:25.216999  451806 node_ready.go:38] duration metric: took 20.954364ms for node "default-k8s-diff-port-223394" to be "Ready" ...
	I1025 10:56:25.217012  451806 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:56:25.217069  451806 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:56:25.243729  451806 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1025 10:56:25.246802  451806 addons.go:514] duration metric: took 6.795559443s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 10:56:25.262610  451806 api_server.go:72] duration metric: took 6.81171328s to wait for apiserver process to appear ...
	I1025 10:56:25.262633  451806 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:56:25.262652  451806 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1025 10:56:25.279367  451806 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:56:25.279399  451806 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
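
The 500 above is transient: every check except poststarthook/rbac/bootstrap-roles reports ok, and that hook flips to ok once the apiserver finishes seeding the default RBAC objects, so minikube simply re-polls (the retry at 10:56:25.763 below returns 200). A minimal poll loop with the same tolerate-500-until-deadline shape, as a sketch; it assumes an *http.Client already configured to trust the cluster CA:

    package healthz

    import (
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    // waitHealthz polls until /healthz returns 200, logging the check list
    // on each failure much as the output above shows.
    func waitHealthz(client *http.Client, url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			log.Printf("healthz %d:\n%s", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("healthz not ok within %s", timeout)
    }
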
	I1025 10:56:24.660137  448482 pod_ready.go:94] pod "coredns-66bc5c9577-sqrrf" is "Ready"
	I1025 10:56:24.660162  448482 pod_ready.go:86] duration metric: took 1.005646875s for pod "coredns-66bc5c9577-sqrrf" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:56:24.663290  448482 pod_ready.go:83] waiting for pod "etcd-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:56:24.668549  448482 pod_ready.go:94] pod "etcd-embed-certs-348342" is "Ready"
	I1025 10:56:24.668628  448482 pod_ready.go:86] duration metric: took 5.312393ms for pod "etcd-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:56:24.677209  448482 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:56:24.683699  448482 pod_ready.go:94] pod "kube-apiserver-embed-certs-348342" is "Ready"
	I1025 10:56:24.683769  448482 pod_ready.go:86] duration metric: took 6.534402ms for pod "kube-apiserver-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:56:24.687184  448482 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:56:24.858876  448482 pod_ready.go:94] pod "kube-controller-manager-embed-certs-348342" is "Ready"
	I1025 10:56:24.858958  448482 pod_ready.go:86] duration metric: took 171.70336ms for pod "kube-controller-manager-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:56:25.058759  448482 pod_ready.go:83] waiting for pod "kube-proxy-j9ngr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:56:25.458905  448482 pod_ready.go:94] pod "kube-proxy-j9ngr" is "Ready"
	I1025 10:56:25.458930  448482 pod_ready.go:86] duration metric: took 400.101134ms for pod "kube-proxy-j9ngr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:56:25.659894  448482 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:56:26.059663  448482 pod_ready.go:94] pod "kube-scheduler-embed-certs-348342" is "Ready"
	I1025 10:56:26.059692  448482 pod_ready.go:86] duration metric: took 399.772081ms for pod "kube-scheduler-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:56:26.059705  448482 pod_ready.go:40] duration metric: took 2.413173155s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:56:26.142780  448482 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:56:26.146155  448482 out.go:179] * Done! kubectl is now configured to use "embed-certs-348342" cluster and "default" namespace by default
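
The pod_ready waits above (e.g. 1.005646875s for coredns-66bc5c9577-sqrrf) reduce to reading the PodReady condition off each pod. A client-go sketch of that check (clientset construction omitted; isReady is our name, the namespace and pod names come from the log):

    package podready

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isReady fetches the pod and reports whether its PodReady condition
    // is True, which is what each "is \"Ready\"" log line asserts.
    func isReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }
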
	I1025 10:56:25.763110  451806 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1025 10:56:25.771062  451806 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1025 10:56:25.772227  451806 api_server.go:141] control plane version: v1.34.1
	I1025 10:56:25.772253  451806 api_server.go:131] duration metric: took 509.613702ms to wait for apiserver health ...
	I1025 10:56:25.772264  451806 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:56:25.775757  451806 system_pods.go:59] 8 kube-system pods found
	I1025 10:56:25.775796  451806 system_pods.go:61] "coredns-66bc5c9577-w9r8g" [83c72429-725c-4c35-bb11-105ba8c376f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:56:25.775806  451806 system_pods.go:61] "etcd-default-k8s-diff-port-223394" [ab9ee066-36cf-49d8-8df1-322149f30734] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:56:25.775813  451806 system_pods.go:61] "kindnet-tclvn" [d1cfc858-b152-45a9-be90-74dff3a44e56] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:56:25.775826  451806 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-223394" [d04401de-dd89-47b6-9578-8ebdda939aa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:56:25.775842  451806 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-223394" [7be42c7d-33c1-4768-b784-bb82322b2638] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:56:25.775855  451806 system_pods.go:61] "kube-proxy-zpq57" [ec42d09b-7c2e-41f3-a944-c0551a0a9c52] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:56:25.775868  451806 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-223394" [0e489dec-def1-4bb4-b635-18f453c541d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:56:25.775878  451806 system_pods.go:61] "storage-provisioner" [8eadd837-3c03-4cd3-97cb-5f7664d9620a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:56:25.775885  451806 system_pods.go:74] duration metric: took 3.61563ms to wait for pod list to return data ...
	I1025 10:56:25.775893  451806 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:56:25.778550  451806 default_sa.go:45] found service account: "default"
	I1025 10:56:25.778573  451806 default_sa.go:55] duration metric: took 2.672943ms for default service account to be created ...
	I1025 10:56:25.778583  451806 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:56:25.781434  451806 system_pods.go:86] 8 kube-system pods found
	I1025 10:56:25.781469  451806 system_pods.go:89] "coredns-66bc5c9577-w9r8g" [83c72429-725c-4c35-bb11-105ba8c376f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:56:25.781481  451806 system_pods.go:89] "etcd-default-k8s-diff-port-223394" [ab9ee066-36cf-49d8-8df1-322149f30734] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:56:25.781488  451806 system_pods.go:89] "kindnet-tclvn" [d1cfc858-b152-45a9-be90-74dff3a44e56] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:56:25.781500  451806 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-223394" [d04401de-dd89-47b6-9578-8ebdda939aa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:56:25.781511  451806 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-223394" [7be42c7d-33c1-4768-b784-bb82322b2638] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:56:25.781518  451806 system_pods.go:89] "kube-proxy-zpq57" [ec42d09b-7c2e-41f3-a944-c0551a0a9c52] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:56:25.781524  451806 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-223394" [0e489dec-def1-4bb4-b635-18f453c541d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:56:25.781532  451806 system_pods.go:89] "storage-provisioner" [8eadd837-3c03-4cd3-97cb-5f7664d9620a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:56:25.781543  451806 system_pods.go:126] duration metric: took 2.955014ms to wait for k8s-apps to be running ...
	I1025 10:56:25.781552  451806 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:56:25.781613  451806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:56:25.795570  451806 system_svc.go:56] duration metric: took 14.007496ms WaitForService to wait for kubelet
	I1025 10:56:25.795597  451806 kubeadm.go:586] duration metric: took 7.344707313s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:56:25.795615  451806 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:56:25.798715  451806 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:56:25.798746  451806 node_conditions.go:123] node cpu capacity is 2
	I1025 10:56:25.798759  451806 node_conditions.go:105] duration metric: took 3.13863ms to run NodePressure ...
	I1025 10:56:25.798772  451806 start.go:241] waiting for startup goroutines ...
	I1025 10:56:25.798786  451806 start.go:246] waiting for cluster config update ...
	I1025 10:56:25.798799  451806 start.go:255] writing updated cluster config ...
	I1025 10:56:25.799080  451806 ssh_runner.go:195] Run: rm -f paused
	I1025 10:56:25.807133  451806 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:56:25.811187  451806 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w9r8g" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:56:27.831417  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
	W1025 10:56:30.318172  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
	W1025 10:56:32.818162  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
	W1025 10:56:35.318307  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 25 10:56:23 embed-certs-348342 crio[840]: time="2025-10-25T10:56:23.31315362Z" level=info msg="Created container d539648509a1709803620417f67291d480014761fcb4a6ed63a80b3eec9f5b13: kube-system/coredns-66bc5c9577-sqrrf/coredns" id=5314c7bc-527c-4d3b-8f7a-80ab22a82f4f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:56:23 embed-certs-348342 crio[840]: time="2025-10-25T10:56:23.314394986Z" level=info msg="Starting container: d539648509a1709803620417f67291d480014761fcb4a6ed63a80b3eec9f5b13" id=f622532e-8db8-4320-877b-39843a2ef194 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:56:23 embed-certs-348342 crio[840]: time="2025-10-25T10:56:23.326202403Z" level=info msg="Started container" PID=1737 containerID=d539648509a1709803620417f67291d480014761fcb4a6ed63a80b3eec9f5b13 description=kube-system/coredns-66bc5c9577-sqrrf/coredns id=f622532e-8db8-4320-877b-39843a2ef194 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e34c22fd282dcdeedcf2615ccaec48ccbaffb4b156565a23b91787775a159faf
	Oct 25 10:56:26 embed-certs-348342 crio[840]: time="2025-10-25T10:56:26.673676217Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5e268604-a5dc-463f-8875-0b17b379dbd8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:56:26 embed-certs-348342 crio[840]: time="2025-10-25T10:56:26.673757482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:56:26 embed-certs-348342 crio[840]: time="2025-10-25T10:56:26.679061029Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e526393c734008c855cc7b2c845322974c63291093308d3ac40c7619848e8927 UID:a1179500-843c-448c-966c-265e80b91b4f NetNS:/var/run/netns/0351ef84-81fd-4fca-8f96-0d85648cdb20 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000d1040}] Aliases:map[]}"
	Oct 25 10:56:26 embed-certs-348342 crio[840]: time="2025-10-25T10:56:26.679099733Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 10:56:26 embed-certs-348342 crio[840]: time="2025-10-25T10:56:26.69313854Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e526393c734008c855cc7b2c845322974c63291093308d3ac40c7619848e8927 UID:a1179500-843c-448c-966c-265e80b91b4f NetNS:/var/run/netns/0351ef84-81fd-4fca-8f96-0d85648cdb20 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000d1040}] Aliases:map[]}"
	Oct 25 10:56:26 embed-certs-348342 crio[840]: time="2025-10-25T10:56:26.693367801Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 10:56:26 embed-certs-348342 crio[840]: time="2025-10-25T10:56:26.700814827Z" level=info msg="Ran pod sandbox e526393c734008c855cc7b2c845322974c63291093308d3ac40c7619848e8927 with infra container: default/busybox/POD" id=5e268604-a5dc-463f-8875-0b17b379dbd8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:56:26 embed-certs-348342 crio[840]: time="2025-10-25T10:56:26.70235537Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5fdd9b5a-b4d5-4a2d-bcca-78612679bf7d name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:56:26 embed-certs-348342 crio[840]: time="2025-10-25T10:56:26.702663828Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5fdd9b5a-b4d5-4a2d-bcca-78612679bf7d name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:56:26 embed-certs-348342 crio[840]: time="2025-10-25T10:56:26.702798812Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5fdd9b5a-b4d5-4a2d-bcca-78612679bf7d name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:56:26 embed-certs-348342 crio[840]: time="2025-10-25T10:56:26.704060936Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e604c75b-8ea1-47e1-8ccd-834ad17e13d8 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:56:26 embed-certs-348342 crio[840]: time="2025-10-25T10:56:26.707113222Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 10:56:28 embed-certs-348342 crio[840]: time="2025-10-25T10:56:28.80821492Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=e604c75b-8ea1-47e1-8ccd-834ad17e13d8 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:56:28 embed-certs-348342 crio[840]: time="2025-10-25T10:56:28.809384597Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f6ae2a0f-b38d-4966-bc36-fdb65e70f609 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:56:28 embed-certs-348342 crio[840]: time="2025-10-25T10:56:28.814103302Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7b40fa8a-95d5-4ae3-a650-9aca91ae5c6a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:56:28 embed-certs-348342 crio[840]: time="2025-10-25T10:56:28.823997816Z" level=info msg="Creating container: default/busybox/busybox" id=d50ed3b5-9ec1-43f6-877b-3c21ffa58834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:56:28 embed-certs-348342 crio[840]: time="2025-10-25T10:56:28.824280018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:56:28 embed-certs-348342 crio[840]: time="2025-10-25T10:56:28.833825294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:56:28 embed-certs-348342 crio[840]: time="2025-10-25T10:56:28.834495003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:56:28 embed-certs-348342 crio[840]: time="2025-10-25T10:56:28.854906314Z" level=info msg="Created container 956626440f3e28bf4c16e9e7e31f54bad7cbeddb25c45a873828dceae8e4bfc7: default/busybox/busybox" id=d50ed3b5-9ec1-43f6-877b-3c21ffa58834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:56:28 embed-certs-348342 crio[840]: time="2025-10-25T10:56:28.858355782Z" level=info msg="Starting container: 956626440f3e28bf4c16e9e7e31f54bad7cbeddb25c45a873828dceae8e4bfc7" id=d7477759-6a20-4537-96f8-f6c150ad354f name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:56:28 embed-certs-348342 crio[840]: time="2025-10-25T10:56:28.860974201Z" level=info msg="Started container" PID=1793 containerID=956626440f3e28bf4c16e9e7e31f54bad7cbeddb25c45a873828dceae8e4bfc7 description=default/busybox/busybox id=d7477759-6a20-4537-96f8-f6c150ad354f name=/runtime.v1.RuntimeService/StartContainer sandboxID=e526393c734008c855cc7b2c845322974c63291093308d3ac40c7619848e8927
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	956626440f3e2       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   e526393c73400       busybox                                      default
	d539648509a17       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   e34c22fd282dc       coredns-66bc5c9577-sqrrf                     kube-system
	87c45bcc1f87f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   5d3665766dbeb       storage-provisioner                          kube-system
	e044a9bfcfbc5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   d3cbd8f42c0a9       kindnet-q5mzm                                kube-system
	74090a0ce0a2d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   ca535bf2385a6       kube-proxy-j9ngr                             kube-system
	11cc5ece9dd3f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   a00d08da96dc1       etcd-embed-certs-348342                      kube-system
	b6bb3daf8ca9a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   16400b0337f15       kube-controller-manager-embed-certs-348342   kube-system
	fb5536757a4d7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   34847d7031b26       kube-apiserver-embed-certs-348342            kube-system
	6c47a9c78a98e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   d78b8ac180f3f       kube-scheduler-embed-certs-348342            kube-system
	
	
	==> coredns [d539648509a1709803620417f67291d480014761fcb4a6ed63a80b3eec9f5b13] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50561 - 31313 "HINFO IN 5912666927378943085.7600712621359866716. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01959519s
	
	
	==> describe nodes <==
	Name:               embed-certs-348342
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-348342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=embed-certs-348342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_55_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:55:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-348342
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:56:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:56:37 +0000   Sat, 25 Oct 2025 10:55:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:56:37 +0000   Sat, 25 Oct 2025 10:55:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:56:37 +0000   Sat, 25 Oct 2025 10:55:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:56:37 +0000   Sat, 25 Oct 2025 10:56:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-348342
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                16712958-e8b7-42c4-971b-a9b56c3615de
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-sqrrf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-embed-certs-348342                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-q5mzm                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-348342             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-embed-certs-348342    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-j9ngr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-348342             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 55s   kube-proxy       
	  Normal   Starting                 62s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s   kubelet          Node embed-certs-348342 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s   kubelet          Node embed-certs-348342 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s   kubelet          Node embed-certs-348342 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s   node-controller  Node embed-certs-348342 event: Registered Node embed-certs-348342 in Controller
	  Normal   NodeReady                16s   kubelet          Node embed-certs-348342 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[ +23.146409] overlayfs: idmapped layers are currently not supported
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
	[Oct25 10:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:55] overlayfs: idmapped layers are currently not supported
	[Oct25 10:56] overlayfs: idmapped layers are currently not supported
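	# These warnings recur roughly once per minikube node created on this shared CI host; on this
	# 5.15 kernel they are likely fallback noise from container runtimes probing for idmapped-mount
	# support (assumption). A hedged way to view the same messages with wall-clock timestamps
	# (requires root on the host):
	#   sudo dmesg -T | grep overlayfs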
	
	
	==> etcd [11cc5ece9dd3f0c4e92e3432978d54efef3afabfc0a6defc408943fea48a4422] <==
	{"level":"warn","ts":"2025-10-25T10:55:32.082357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.097791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.118073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.134382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.152877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.171198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.186750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.202777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.218299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.243598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.257521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.277456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.294086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.314772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.340131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.368630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.379091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.402303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.423304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.438527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.466993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.496950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.510709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.531114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:55:32.611959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41368","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:56:38 up  2:39,  0 user,  load average: 3.05, 3.30, 2.86
	Linux embed-certs-348342 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e044a9bfcfbc5eee2df5964fbf49e06b5732afa759743e58293d89403cf44828] <==
	I1025 10:55:42.218622       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:55:42.218877       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:55:42.219003       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:55:42.219015       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:55:42.219031       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:55:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:55:42.423555       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:55:42.423595       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:55:42.423604       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:55:42.424448       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:56:12.423993       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:56:12.423999       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:56:12.424979       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:56:12.424997       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 10:56:14.127018       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:56:14.127048       1 metrics.go:72] Registering metrics
	I1025 10:56:14.127109       1 controller.go:711] "Syncing nftables rules"
	I1025 10:56:22.426102       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:56:22.426218       1 main.go:301] handling current node
	I1025 10:56:32.425449       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:56:32.425489       1 main.go:301] handling current node
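	# The i/o timeouts against the service VIP 10.96.0.1:443 at 10:56:12 are transient: the
	# watches recover and the caches sync at 10:56:14. A hedged connectivity check against the
	# VIP from inside the node (assumes curl is present in the kicbase image):
	#   docker exec embed-certs-348342 curl -sk https://10.96.0.1:443/version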
	
	
	==> kube-apiserver [fb5536757a4d7cea4042959e73e7944d61482f13d15e2d5e38c3df418bd1c5e3] <==
	I1025 10:55:33.482591       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1025 10:55:33.484295       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:55:33.497063       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:55:33.518743       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:55:33.519268       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:55:33.533530       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:55:33.583471       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:55:33.583549       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:55:34.292354       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:55:34.302825       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:55:34.302917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:55:35.122467       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:55:35.195777       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:55:35.293773       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:55:35.302260       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 10:55:35.304407       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:55:35.309434       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:55:35.392006       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:55:36.270042       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:55:36.322614       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:55:36.336557       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:55:40.549264       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:55:40.556497       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:55:41.244652       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:55:41.521795       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b6bb3daf8ca9af5634d7d5ad298c78aff53a5e87a2445f98d5d2597367e62b14] <==
	I1025 10:55:40.418948       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-348342" podCIDRs=["10.244.0.0/24"]
	I1025 10:55:40.425287       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:55:40.425694       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:55:40.437662       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:55:40.438871       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:55:40.438891       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:55:40.438902       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:55:40.439289       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:55:40.439599       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:55:40.442964       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:55:40.443062       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:55:40.443151       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-348342"
	I1025 10:55:40.447434       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:55:40.447529       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:55:40.448174       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:55:40.448208       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:55:40.448231       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:55:40.448400       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:55:40.448430       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:55:40.448913       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 10:55:40.448972       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:55:40.458724       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:55:40.458793       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:55:40.459557       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:56:25.459305       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [74090a0ce0a2d33440e9f969e1b131213c13b0555430d0d3d3574cddc946a0bd] <==
	I1025 10:55:42.146281       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:55:42.239726       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:55:42.340132       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:55:42.340183       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:55:42.340253       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:55:42.431899       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:55:42.434151       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:55:42.459488       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:55:42.459800       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:55:42.459818       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:55:42.467002       1 config.go:200] "Starting service config controller"
	I1025 10:55:42.467017       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:55:42.467035       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:55:42.467039       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:55:42.467066       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:55:42.467071       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:55:42.467495       1 config.go:309] "Starting node config controller"
	I1025 10:55:42.467502       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:55:42.467507       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:55:42.567541       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:55:42.567582       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:55:42.567626       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
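	# The "nodePortAddresses is unset" message above is kube-proxy's own configuration advice,
	# not a failure. On a kubeadm-provisioned cluster like this one the setting lives in the
	# kube-proxy ConfigMap; a hedged way to inspect it:
	#   kubectl --context embed-certs-348342 -n kube-system get configmap kube-proxy -o yaml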
	
	
	==> kube-scheduler [6c47a9c78a98e09583f3b59636c6329762a6426a59c09be87a062f59b118c678] <==
	I1025 10:55:34.598736       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:55:34.602818       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:55:34.603001       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:55:34.603088       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:55:34.603149       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 10:55:34.616791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:55:34.617228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:55:34.617447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:55:34.617495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:55:34.617529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:55:34.617568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:55:34.617645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:55:34.617686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:55:34.617729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:55:34.617772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:55:34.617816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:55:34.617849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:55:34.617892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:55:34.617936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:55:34.618020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:55:34.618137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:55:34.618183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:55:34.618843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:55:34.618980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1025 10:55:35.603330       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
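	# The burst of "forbidden" list errors at 10:55:34 is the scheduler starting before RBAC
	# bootstrap finished; the cache sync at 10:55:35 shows it recovered. A hedged check that the
	# permissions now exist (requires impersonation rights for the caller):
	#   kubectl --context embed-certs-348342 auth can-i list pods --as=system:kube-scheduler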
	
	
	==> kubelet <==
	Oct 25 10:55:37 embed-certs-348342 kubelet[1304]: I1025 10:55:37.521515    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-348342" podStartSLOduration=3.521510884 podStartE2EDuration="3.521510884s" podCreationTimestamp="2025-10-25 10:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:55:37.52120058 +0000 UTC m=+1.386103997" watchObservedRunningTime="2025-10-25 10:55:37.521510884 +0000 UTC m=+1.386414309"
	Oct 25 10:55:40 embed-certs-348342 kubelet[1304]: I1025 10:55:40.444643    1304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 10:55:40 embed-certs-348342 kubelet[1304]: I1025 10:55:40.445323    1304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 10:55:41 embed-certs-348342 kubelet[1304]: I1025 10:55:41.713800    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/946e15f1-043f-4f6e-a995-79bb16033e3d-xtables-lock\") pod \"kube-proxy-j9ngr\" (UID: \"946e15f1-043f-4f6e-a995-79bb16033e3d\") " pod="kube-system/kube-proxy-j9ngr"
	Oct 25 10:55:41 embed-certs-348342 kubelet[1304]: I1025 10:55:41.713946    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4caa08ee-f6f3-442c-ad08-2be933f2869f-cni-cfg\") pod \"kindnet-q5mzm\" (UID: \"4caa08ee-f6f3-442c-ad08-2be933f2869f\") " pod="kube-system/kindnet-q5mzm"
	Oct 25 10:55:41 embed-certs-348342 kubelet[1304]: I1025 10:55:41.713976    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4caa08ee-f6f3-442c-ad08-2be933f2869f-lib-modules\") pod \"kindnet-q5mzm\" (UID: \"4caa08ee-f6f3-442c-ad08-2be933f2869f\") " pod="kube-system/kindnet-q5mzm"
	Oct 25 10:55:41 embed-certs-348342 kubelet[1304]: I1025 10:55:41.714208    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/946e15f1-043f-4f6e-a995-79bb16033e3d-kube-proxy\") pod \"kube-proxy-j9ngr\" (UID: \"946e15f1-043f-4f6e-a995-79bb16033e3d\") " pod="kube-system/kube-proxy-j9ngr"
	Oct 25 10:55:41 embed-certs-348342 kubelet[1304]: I1025 10:55:41.714237    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/946e15f1-043f-4f6e-a995-79bb16033e3d-lib-modules\") pod \"kube-proxy-j9ngr\" (UID: \"946e15f1-043f-4f6e-a995-79bb16033e3d\") " pod="kube-system/kube-proxy-j9ngr"
	Oct 25 10:55:41 embed-certs-348342 kubelet[1304]: I1025 10:55:41.714304    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4caa08ee-f6f3-442c-ad08-2be933f2869f-xtables-lock\") pod \"kindnet-q5mzm\" (UID: \"4caa08ee-f6f3-442c-ad08-2be933f2869f\") " pod="kube-system/kindnet-q5mzm"
	Oct 25 10:55:41 embed-certs-348342 kubelet[1304]: I1025 10:55:41.714361    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhfms\" (UniqueName: \"kubernetes.io/projected/4caa08ee-f6f3-442c-ad08-2be933f2869f-kube-api-access-jhfms\") pod \"kindnet-q5mzm\" (UID: \"4caa08ee-f6f3-442c-ad08-2be933f2869f\") " pod="kube-system/kindnet-q5mzm"
	Oct 25 10:55:41 embed-certs-348342 kubelet[1304]: I1025 10:55:41.714386    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hvjn\" (UniqueName: \"kubernetes.io/projected/946e15f1-043f-4f6e-a995-79bb16033e3d-kube-api-access-9hvjn\") pod \"kube-proxy-j9ngr\" (UID: \"946e15f1-043f-4f6e-a995-79bb16033e3d\") " pod="kube-system/kube-proxy-j9ngr"
	Oct 25 10:55:41 embed-certs-348342 kubelet[1304]: I1025 10:55:41.847308    1304 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:55:41 embed-certs-348342 kubelet[1304]: W1025 10:55:41.979389    1304 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/crio-d3cbd8f42c0a9696bdd037a028d0530ad586827ae7b5f7e4aaa69214e380d867 WatchSource:0}: Error finding container d3cbd8f42c0a9696bdd037a028d0530ad586827ae7b5f7e4aaa69214e380d867: Status 404 returned error can't find the container with id d3cbd8f42c0a9696bdd037a028d0530ad586827ae7b5f7e4aaa69214e380d867
	Oct 25 10:55:42 embed-certs-348342 kubelet[1304]: I1025 10:55:42.396438    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-q5mzm" podStartSLOduration=1.396421069 podStartE2EDuration="1.396421069s" podCreationTimestamp="2025-10-25 10:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:55:42.395815976 +0000 UTC m=+6.260719385" watchObservedRunningTime="2025-10-25 10:55:42.396421069 +0000 UTC m=+6.261324478"
	Oct 25 10:55:43 embed-certs-348342 kubelet[1304]: I1025 10:55:43.163973    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j9ngr" podStartSLOduration=2.163952597 podStartE2EDuration="2.163952597s" podCreationTimestamp="2025-10-25 10:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:55:42.42793311 +0000 UTC m=+6.292836527" watchObservedRunningTime="2025-10-25 10:55:43.163952597 +0000 UTC m=+7.028856006"
	Oct 25 10:56:22 embed-certs-348342 kubelet[1304]: I1025 10:56:22.720624    1304 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:56:22 embed-certs-348342 kubelet[1304]: I1025 10:56:22.862132    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2zkg\" (UniqueName: \"kubernetes.io/projected/9a91278c-945c-48bc-be8b-39e026d485b4-kube-api-access-p2zkg\") pod \"storage-provisioner\" (UID: \"9a91278c-945c-48bc-be8b-39e026d485b4\") " pod="kube-system/storage-provisioner"
	Oct 25 10:56:22 embed-certs-348342 kubelet[1304]: I1025 10:56:22.862357    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9a91278c-945c-48bc-be8b-39e026d485b4-tmp\") pod \"storage-provisioner\" (UID: \"9a91278c-945c-48bc-be8b-39e026d485b4\") " pod="kube-system/storage-provisioner"
	Oct 25 10:56:22 embed-certs-348342 kubelet[1304]: I1025 10:56:22.963517    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15846173-f49c-4d50-af52-3b1b371fde43-config-volume\") pod \"coredns-66bc5c9577-sqrrf\" (UID: \"15846173-f49c-4d50-af52-3b1b371fde43\") " pod="kube-system/coredns-66bc5c9577-sqrrf"
	Oct 25 10:56:22 embed-certs-348342 kubelet[1304]: I1025 10:56:22.963767    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg4dl\" (UniqueName: \"kubernetes.io/projected/15846173-f49c-4d50-af52-3b1b371fde43-kube-api-access-mg4dl\") pod \"coredns-66bc5c9577-sqrrf\" (UID: \"15846173-f49c-4d50-af52-3b1b371fde43\") " pod="kube-system/coredns-66bc5c9577-sqrrf"
	Oct 25 10:56:23 embed-certs-348342 kubelet[1304]: W1025 10:56:23.187986    1304 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/crio-e34c22fd282dcdeedcf2615ccaec48ccbaffb4b156565a23b91787775a159faf WatchSource:0}: Error finding container e34c22fd282dcdeedcf2615ccaec48ccbaffb4b156565a23b91787775a159faf: Status 404 returned error can't find the container with id e34c22fd282dcdeedcf2615ccaec48ccbaffb4b156565a23b91787775a159faf
	Oct 25 10:56:23 embed-certs-348342 kubelet[1304]: I1025 10:56:23.532677    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sqrrf" podStartSLOduration=42.532654814 podStartE2EDuration="42.532654814s" podCreationTimestamp="2025-10-25 10:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:56:23.505917898 +0000 UTC m=+47.370821307" watchObservedRunningTime="2025-10-25 10:56:23.532654814 +0000 UTC m=+47.397558231"
	Oct 25 10:56:24 embed-certs-348342 kubelet[1304]: I1025 10:56:24.486942    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.48692034 podStartE2EDuration="42.48692034s" podCreationTimestamp="2025-10-25 10:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:56:23.532974619 +0000 UTC m=+47.397878037" watchObservedRunningTime="2025-10-25 10:56:24.48692034 +0000 UTC m=+48.351823748"
	Oct 25 10:56:26 embed-certs-348342 kubelet[1304]: I1025 10:56:26.494439    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljw74\" (UniqueName: \"kubernetes.io/projected/a1179500-843c-448c-966c-265e80b91b4f-kube-api-access-ljw74\") pod \"busybox\" (UID: \"a1179500-843c-448c-966c-265e80b91b4f\") " pod="default/busybox"
	Oct 25 10:56:26 embed-certs-348342 kubelet[1304]: W1025 10:56:26.699583    1304 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/crio-e526393c734008c855cc7b2c845322974c63291093308d3ac40c7619848e8927 WatchSource:0}: Error finding container e526393c734008c855cc7b2c845322974c63291093308d3ac40c7619848e8927: Status 404 returned error can't find the container with id e526393c734008c855cc7b2c845322974c63291093308d3ac40c7619848e8927
	
	
	==> storage-provisioner [87c45bcc1f87ffa18e6932431b64fe79a0615b4ffad9e7d77c41b8f1b1b41cea] <==
	I1025 10:56:23.300264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:56:23.318184       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:56:23.318304       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:56:23.345520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:23.363613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:56:23.363914       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:56:23.364168       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-348342_dd7ddb03-d47c-448a-a54c-47694c283a9d!
	I1025 10:56:23.374142       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7ec49e55-bd27-4484-99d7-316a9176b2fc", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-348342_dd7ddb03-d47c-448a-a54c-47694c283a9d became leader
	W1025 10:56:23.378041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:23.395350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:56:23.469225       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-348342_dd7ddb03-d47c-448a-a54c-47694c283a9d!
	W1025 10:56:25.412429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:25.421416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:27.424619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:27.428969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:29.432194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:29.437314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:31.440918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:31.446825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:33.450260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:33.458225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:35.461196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:35.466926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:37.470257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:37.478476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
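	# The repeating deprecation warnings come from the provisioner's leader election, which still
	# renews an Endpoints-based lock every two seconds. The lock object named in the log can be
	# inspected directly:
	#   kubectl --context embed-certs-348342 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml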
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-348342 -n embed-certs-348342
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-348342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.63s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-223394 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-223394 --alsologtostderr -v=1: exit status 80 (2.418370019s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-223394 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:57:16.921945  456882 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:57:16.922203  456882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:57:16.922210  456882 out.go:374] Setting ErrFile to fd 2...
	I1025 10:57:16.922215  456882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:57:16.922503  456882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:57:16.922746  456882 out.go:368] Setting JSON to false
	I1025 10:57:16.922767  456882 mustload.go:65] Loading cluster: default-k8s-diff-port-223394
	I1025 10:57:16.923140  456882 config.go:182] Loaded profile config "default-k8s-diff-port-223394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:57:16.923626  456882 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-223394 --format={{.State.Status}}
	I1025 10:57:16.953870  456882 host.go:66] Checking if "default-k8s-diff-port-223394" exists ...
	I1025 10:57:16.954317  456882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:57:17.058409  456882 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-25 10:57:17.048593855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:57:17.059064  456882 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-223394 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:57:17.063952  456882 out.go:179] * Pausing node default-k8s-diff-port-223394 ... 
	I1025 10:57:17.068550  456882 host.go:66] Checking if "default-k8s-diff-port-223394" exists ...
	I1025 10:57:17.068899  456882 ssh_runner.go:195] Run: systemctl --version
	I1025 10:57:17.068956  456882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-223394
	I1025 10:57:17.109870  456882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/default-k8s-diff-port-223394/id_rsa Username:docker}
	I1025 10:57:17.228121  456882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:57:17.244140  456882 pause.go:52] kubelet running: true
	I1025 10:57:17.244231  456882 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:57:17.593203  456882 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:57:17.593297  456882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:57:17.701202  456882 cri.go:89] found id: "9ae681564efc76dca39400bd2a4f79850bccb015cd2dac17da552bc3b801e930"
	I1025 10:57:17.701235  456882 cri.go:89] found id: "87d869504793063a19919eff743283ee2b55be58b9d8352930f36eb27a405677"
	I1025 10:57:17.701240  456882 cri.go:89] found id: "8fe3eb331d0de316a3705b7640f3dfebe0d4ca0136afef7d77f67dcd835bce76"
	I1025 10:57:17.701245  456882 cri.go:89] found id: "07de50a4075df0e36d34be8ef0e96165ff750994856c24e4902fe73fc3fcb1fb"
	I1025 10:57:17.701248  456882 cri.go:89] found id: "f62f9dca6b34ebcc637ed54376e046a6d45148e3c61defb45110e0d36387c285"
	I1025 10:57:17.701252  456882 cri.go:89] found id: "eb487ee4e10f68f40f25f7e75f8231d3678bb16df616131e9dd7d0bbf8f2f3ed"
	I1025 10:57:17.701255  456882 cri.go:89] found id: "6c982a01974bebe010fd07605d0e7e6f34d2e021c6ffb16dedef170e47c26875"
	I1025 10:57:17.701258  456882 cri.go:89] found id: "9779636f70f0c278cba11f390d72d18ecf6492c685c187c54ca454f436e08653"
	I1025 10:57:17.701270  456882 cri.go:89] found id: "c82e104c40d0dc2552e92b2571bdc6ca33dc11c21c904ce5b807e393939d0fe1"
	I1025 10:57:17.701277  456882 cri.go:89] found id: "880c32c6e4d36298f25604e265d4a1cd58e474073d844950f788323abad6cb80"
	I1025 10:57:17.701284  456882 cri.go:89] found id: "006247d5cea81537221f79175bf519982dcf6cbc03bf6367c41f687ef833cf21"
	I1025 10:57:17.701287  456882 cri.go:89] found id: ""
	I1025 10:57:17.701344  456882 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:57:17.719919  456882 retry.go:31] will retry after 275.437996ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:57:17Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:57:17.996212  456882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:57:18.018310  456882 pause.go:52] kubelet running: false
	I1025 10:57:18.018415  456882 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:57:18.343479  456882 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:57:18.343570  456882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:57:18.468216  456882 cri.go:89] found id: "9ae681564efc76dca39400bd2a4f79850bccb015cd2dac17da552bc3b801e930"
	I1025 10:57:18.468285  456882 cri.go:89] found id: "87d869504793063a19919eff743283ee2b55be58b9d8352930f36eb27a405677"
	I1025 10:57:18.468308  456882 cri.go:89] found id: "8fe3eb331d0de316a3705b7640f3dfebe0d4ca0136afef7d77f67dcd835bce76"
	I1025 10:57:18.468332  456882 cri.go:89] found id: "07de50a4075df0e36d34be8ef0e96165ff750994856c24e4902fe73fc3fcb1fb"
	I1025 10:57:18.468364  456882 cri.go:89] found id: "f62f9dca6b34ebcc637ed54376e046a6d45148e3c61defb45110e0d36387c285"
	I1025 10:57:18.468391  456882 cri.go:89] found id: "eb487ee4e10f68f40f25f7e75f8231d3678bb16df616131e9dd7d0bbf8f2f3ed"
	I1025 10:57:18.468430  456882 cri.go:89] found id: "6c982a01974bebe010fd07605d0e7e6f34d2e021c6ffb16dedef170e47c26875"
	I1025 10:57:18.468449  456882 cri.go:89] found id: "9779636f70f0c278cba11f390d72d18ecf6492c685c187c54ca454f436e08653"
	I1025 10:57:18.468476  456882 cri.go:89] found id: "c82e104c40d0dc2552e92b2571bdc6ca33dc11c21c904ce5b807e393939d0fe1"
	I1025 10:57:18.468504  456882 cri.go:89] found id: "880c32c6e4d36298f25604e265d4a1cd58e474073d844950f788323abad6cb80"
	I1025 10:57:18.468528  456882 cri.go:89] found id: "006247d5cea81537221f79175bf519982dcf6cbc03bf6367c41f687ef833cf21"
	I1025 10:57:18.468552  456882 cri.go:89] found id: ""
	I1025 10:57:18.468640  456882 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:57:18.486448  456882 retry.go:31] will retry after 476.147907ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:57:18Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:57:18.963353  456882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:57:18.980442  456882 pause.go:52] kubelet running: false
	I1025 10:57:18.980513  456882 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:57:19.164898  456882 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:57:19.165023  456882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:57:19.240206  456882 cri.go:89] found id: "9ae681564efc76dca39400bd2a4f79850bccb015cd2dac17da552bc3b801e930"
	I1025 10:57:19.240238  456882 cri.go:89] found id: "87d869504793063a19919eff743283ee2b55be58b9d8352930f36eb27a405677"
	I1025 10:57:19.240244  456882 cri.go:89] found id: "8fe3eb331d0de316a3705b7640f3dfebe0d4ca0136afef7d77f67dcd835bce76"
	I1025 10:57:19.240247  456882 cri.go:89] found id: "07de50a4075df0e36d34be8ef0e96165ff750994856c24e4902fe73fc3fcb1fb"
	I1025 10:57:19.240251  456882 cri.go:89] found id: "f62f9dca6b34ebcc637ed54376e046a6d45148e3c61defb45110e0d36387c285"
	I1025 10:57:19.240255  456882 cri.go:89] found id: "eb487ee4e10f68f40f25f7e75f8231d3678bb16df616131e9dd7d0bbf8f2f3ed"
	I1025 10:57:19.240267  456882 cri.go:89] found id: "6c982a01974bebe010fd07605d0e7e6f34d2e021c6ffb16dedef170e47c26875"
	I1025 10:57:19.240272  456882 cri.go:89] found id: "9779636f70f0c278cba11f390d72d18ecf6492c685c187c54ca454f436e08653"
	I1025 10:57:19.240281  456882 cri.go:89] found id: "c82e104c40d0dc2552e92b2571bdc6ca33dc11c21c904ce5b807e393939d0fe1"
	I1025 10:57:19.240303  456882 cri.go:89] found id: "880c32c6e4d36298f25604e265d4a1cd58e474073d844950f788323abad6cb80"
	I1025 10:57:19.240306  456882 cri.go:89] found id: "006247d5cea81537221f79175bf519982dcf6cbc03bf6367c41f687ef833cf21"
	I1025 10:57:19.240309  456882 cri.go:89] found id: ""
	I1025 10:57:19.240365  456882 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:57:19.260373  456882 out.go:203] 
	W1025 10:57:19.264429  456882 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:57:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:57:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:57:19.264456  456882 out.go:285] * 
	* 
	W1025 10:57:19.270220  456882 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:57:19.273958  456882 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-223394 --alsologtostderr -v=1 failed: exit status 80
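The failure mode here is the one retried at 10:57:18 above: minikube's pause path enumerates containers with "sudo runc list -f json", and runc exits with status 1 because its default state directory, /run/runc, does not exist on the node, even though CRI-O itself still reports the pods via crictl. A minimal sketch of that probe in Go, illustrative only and not minikube's implementation (the helper name probeRunc is made up):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// probeRunc runs the same listing command that fails in the log above and
// distinguishes "state directory missing" from other runc errors.
func probeRunc() error {
	cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	out, err := cmd.Output()
	if err != nil {
		// Matches the stderr seen above: open /run/runc: no such file or directory.
		if strings.Contains(stderr.String(), "no such file or directory") {
			return fmt.Errorf("runc state dir absent, nothing to list: %s",
				strings.TrimSpace(stderr.String()))
		}
		return fmt.Errorf("runc list failed: %v: %s", err, stderr.String())
	}
	fmt.Printf("runc containers: %s\n", out)
	return nil
}

func main() {
	if err := probeRunc(); err != nil {
		fmt.Println(err)
	}
}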
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-223394
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-223394:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7",
	        "Created": "2025-10-25T10:54:33.801036185Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 451934,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:56:10.658387728Z",
	            "FinishedAt": "2025-10-25T10:56:09.79862304Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/hosts",
	        "LogPath": "/var/lib/docker/containers/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7-json.log",
	        "Name": "/default-k8s-diff-port-223394",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-223394:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-223394",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7",
	                "LowerDir": "/var/lib/docker/overlay2/16c3b01afa0b4ec1bbf75b73359cd04d1fc7ed7d6a6cc96f08daeb4bea593cde-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c3b01afa0b4ec1bbf75b73359cd04d1fc7ed7d6a6cc96f08daeb4bea593cde/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c3b01afa0b4ec1bbf75b73359cd04d1fc7ed7d6a6cc96f08daeb4bea593cde/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c3b01afa0b4ec1bbf75b73359cd04d1fc7ed7d6a6cc96f08daeb4bea593cde/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-223394",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-223394/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-223394",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-223394",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-223394",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9d3a8b9d3048d7a09346a8eed98172f6a3da0497ffaf764f484013c0046e47f2",
	            "SandboxKey": "/var/run/docker/netns/9d3a8b9d3048",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-223394": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:af:d1:33:2c:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8140ea88edc3e6f9170c2a8375ca78b30531642cc0a79f4070e57085e0519f4",
	                    "EndpointID": "a5d34eab1933401a98868a8e62cc481865c3d5f086d92515332cca2ae98779cc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-223394",
	                        "fdfe0713435e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
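Two details of this inspect dump matter for the post-mortem: State still reports Running:true with Paused:false, so the pause never took effect at the Docker level, and every port is bound on loopback only, with SSH at 127.0.0.1:33428 and the non-default 8444 apiserver port at 127.0.0.1:33431. A short sketch of reading one such binding programmatically, reusing the Go template this log later shows minikube passing to docker container inspect (the helper name hostPort is made up):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort (hypothetical helper) asks dockerd for the 127.0.0.1 port bound to
// a container port, via the same Go template minikube's cli_runner uses
// later in this log.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Against the dump above, "22" should yield 33428 and "8444" should yield 33431.
	for _, p := range []string{"22", "8444"} {
		hp, err := hostPort("default-k8s-diff-port-223394", p)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s/tcp -> 127.0.0.1:%s\n", p, hp)
	}
}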
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223394 -n default-k8s-diff-port-223394
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223394 -n default-k8s-diff-port-223394: exit status 2 (399.80808ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-223394 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-223394 logs -n 25: (1.422758263s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-771620 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-771620          │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ delete  │ -p cert-options-771620                                                                                                                                                                                                                        │ cert-options-771620          │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-031983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:52 UTC │                     │
	│ stop    │ -p old-k8s-version-031983 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-031983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:53 UTC │
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:54 UTC │
	│ image   │ old-k8s-version-031983 image list --format=json                                                                                                                                                                                               │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ pause   │ -p old-k8s-version-031983 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │                     │
	│ delete  │ -p old-k8s-version-031983                                                                                                                                                                                                                     │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ delete  │ -p old-k8s-version-031983                                                                                                                                                                                                                     │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p cert-expiration-736062 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ delete  │ -p cert-expiration-736062                                                                                                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-223394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-223394 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-223394 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-348342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │                     │
	│ stop    │ -p embed-certs-348342 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-348342 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │                     │
	│ image   │ default-k8s-diff-port-223394 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p default-k8s-diff-port-223394 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:56:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
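That format line documents klog's header: severity letter, month and day, wall-clock time with microseconds, thread id, and the emitting source location. A throwaway Go parser for lines in this log, assuming nothing beyond the documented format string:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the header documented above:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	sample := "I1025 10:56:51.596284  454751 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("sev=%s month=%s day=%s time=%s tid=%s src=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
}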
	I1025 10:56:51.596284  454751 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:56:51.596406  454751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:56:51.596418  454751 out.go:374] Setting ErrFile to fd 2...
	I1025 10:56:51.596424  454751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:56:51.596739  454751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:56:51.597129  454751 out.go:368] Setting JSON to false
	I1025 10:56:51.598263  454751 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9563,"bootTime":1761380249,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:56:51.598338  454751 start.go:141] virtualization:  
	I1025 10:56:51.603483  454751 out.go:179] * [embed-certs-348342] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:56:51.606747  454751 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:56:51.606790  454751 notify.go:220] Checking for updates...
	I1025 10:56:51.613384  454751 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:56:51.616591  454751 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:56:51.619931  454751 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:56:51.623157  454751 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:56:51.626214  454751 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:56:51.629791  454751 config.go:182] Loaded profile config "embed-certs-348342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:56:51.630456  454751 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:56:51.663255  454751 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:56:51.663447  454751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:56:51.723736  454751 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:56:51.713833036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:56:51.723844  454751 docker.go:318] overlay module found
	I1025 10:56:51.726998  454751 out.go:179] * Using the docker driver based on existing profile
	I1025 10:56:51.729815  454751 start.go:305] selected driver: docker
	I1025 10:56:51.729835  454751 start.go:925] validating driver "docker" against &{Name:embed-certs-348342 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:56:51.729949  454751 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:56:51.730811  454751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:56:51.787063  454751 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:56:51.777393267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:56:51.787413  454751 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:56:51.787445  454751 cni.go:84] Creating CNI manager for ""
	I1025 10:56:51.787506  454751 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:56:51.787547  454751 start.go:349] cluster config:
	{Name:embed-certs-348342 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:56:51.792524  454751 out.go:179] * Starting "embed-certs-348342" primary control-plane node in "embed-certs-348342" cluster
	I1025 10:56:51.795363  454751 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:56:51.798268  454751 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:56:51.801046  454751 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:56:51.801116  454751 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:56:51.801130  454751 cache.go:58] Caching tarball of preloaded images
	I1025 10:56:51.801137  454751 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:56:51.801295  454751 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:56:51.801323  454751 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:56:51.801498  454751 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/config.json ...
	I1025 10:56:51.823203  454751 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:56:51.823228  454751 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:56:51.823247  454751 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:56:51.823279  454751 start.go:360] acquireMachinesLock for embed-certs-348342: {Name:mk6a33c3a0d7242e8af53b027ee4f0bef4d472df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:56:51.823344  454751 start.go:364] duration metric: took 38.769µs to acquireMachinesLock for "embed-certs-348342"
	I1025 10:56:51.823368  454751 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:56:51.823376  454751 fix.go:54] fixHost starting: 
	I1025 10:56:51.823644  454751 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:56:51.840608  454751 fix.go:112] recreateIfNeeded on embed-certs-348342: state=Stopped err=<nil>
	W1025 10:56:51.840660  454751 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 10:56:51.318451  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
	W1025 10:56:53.816466  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
	I1025 10:56:51.843806  454751 out.go:252] * Restarting existing docker container for "embed-certs-348342" ...
	I1025 10:56:51.843894  454751 cli_runner.go:164] Run: docker start embed-certs-348342
	I1025 10:56:52.124823  454751 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:56:52.147848  454751 kic.go:430] container "embed-certs-348342" state is running.
	I1025 10:56:52.150639  454751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348342
	I1025 10:56:52.180436  454751 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/config.json ...
	I1025 10:56:52.180700  454751 machine.go:93] provisionDockerMachine start ...
	I1025 10:56:52.180774  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:52.200616  454751 main.go:141] libmachine: Using SSH client type: native
	I1025 10:56:52.200946  454751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1025 10:56:52.200958  454751 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:56:52.201581  454751 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 10:56:55.356290  454751 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-348342
	
	I1025 10:56:55.356313  454751 ubuntu.go:182] provisioning hostname "embed-certs-348342"
	I1025 10:56:55.356405  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:55.376664  454751 main.go:141] libmachine: Using SSH client type: native
	I1025 10:56:55.376975  454751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1025 10:56:55.376992  454751 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-348342 && echo "embed-certs-348342" | sudo tee /etc/hostname
	I1025 10:56:55.540924  454751 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-348342
	
	I1025 10:56:55.541007  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:55.559401  454751 main.go:141] libmachine: Using SSH client type: native
	I1025 10:56:55.559724  454751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1025 10:56:55.559749  454751 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-348342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-348342/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-348342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:56:55.714279  454751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:56:55.714309  454751 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:56:55.714388  454751 ubuntu.go:190] setting up certificates
	I1025 10:56:55.714415  454751 provision.go:84] configureAuth start
	I1025 10:56:55.714499  454751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348342
	I1025 10:56:55.731441  454751 provision.go:143] copyHostCerts
	I1025 10:56:55.731511  454751 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:56:55.731536  454751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:56:55.731620  454751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:56:55.731726  454751 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:56:55.731737  454751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:56:55.731765  454751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:56:55.731831  454751 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:56:55.731842  454751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:56:55.731866  454751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:56:55.731926  454751 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.embed-certs-348342 san=[127.0.0.1 192.168.76.2 embed-certs-348342 localhost minikube]
	I1025 10:56:56.414963  454751 provision.go:177] copyRemoteCerts
	I1025 10:56:56.415039  454751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:56:56.415082  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:56.433167  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:56:56.538049  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:56:56.558525  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 10:56:56.577561  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:56:56.596044  454751 provision.go:87] duration metric: took 881.592167ms to configureAuth
	I1025 10:56:56.596074  454751 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:56:56.596273  454751 config.go:182] Loaded profile config "embed-certs-348342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:56:56.596380  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:56.614633  454751 main.go:141] libmachine: Using SSH client type: native
	I1025 10:56:56.614969  454751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1025 10:56:56.614996  454751 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:56:56.975554  454751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:56:56.975573  454751 machine.go:96] duration metric: took 4.794859368s to provisionDockerMachine
	I1025 10:56:56.975584  454751 start.go:293] postStartSetup for "embed-certs-348342" (driver="docker")
	I1025 10:56:56.975595  454751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:56:56.975669  454751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:56:56.975710  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:56.997000  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:56:57.106188  454751 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:56:57.109511  454751 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:56:57.109591  454751 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:56:57.109618  454751 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:56:57.109681  454751 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:56:57.109769  454751 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:56:57.109877  454751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:56:57.117339  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:56:57.136927  454751 start.go:296] duration metric: took 161.326672ms for postStartSetup
	I1025 10:56:57.137008  454751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:56:57.137059  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:57.154756  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:56:57.262837  454751 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:56:57.268733  454751 fix.go:56] duration metric: took 5.445349479s for fixHost
	I1025 10:56:57.268757  454751 start.go:83] releasing machines lock for "embed-certs-348342", held for 5.445400162s
	I1025 10:56:57.268839  454751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348342
	I1025 10:56:57.285928  454751 ssh_runner.go:195] Run: cat /version.json
	I1025 10:56:57.286042  454751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:56:57.286073  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:57.286119  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:57.303771  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:56:57.306253  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:56:57.413548  454751 ssh_runner.go:195] Run: systemctl --version
	I1025 10:56:57.533249  454751 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:56:57.577023  454751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:56:57.581930  454751 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:56:57.582072  454751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:56:57.591277  454751 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:56:57.591304  454751 start.go:495] detecting cgroup driver to use...
	I1025 10:56:57.591373  454751 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:56:57.591451  454751 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:56:57.607174  454751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:56:57.620561  454751 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:56:57.620654  454751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:56:57.637281  454751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:56:57.655757  454751 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:56:57.793765  454751 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:56:57.925017  454751 docker.go:234] disabling docker service ...
	I1025 10:56:57.925096  454751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:56:57.941915  454751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:56:57.955616  454751 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:56:58.084465  454751 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:56:58.216579  454751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:56:58.233280  454751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:56:58.248840  454751 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:56:58.248925  454751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.265186  454751 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:56:58.265266  454751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.274415  454751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.283616  454751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.292866  454751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:56:58.301170  454751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.310509  454751 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.320244  454751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.330195  454751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:56:58.338866  454751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:56:58.347150  454751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:56:58.472799  454751 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:56:58.607302  454751 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:56:58.607425  454751 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:56:58.611913  454751 start.go:563] Will wait 60s for crictl version
	I1025 10:56:58.612019  454751 ssh_runner.go:195] Run: which crictl
	I1025 10:56:58.615893  454751 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:56:58.644417  454751 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
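Both 60-second waits above (first for the socket path, then for crictl version) are the same poll-until-deadline gate; as a standalone sketch:

    timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'
    timeout 60 sh -c 'until sudo /usr/local/bin/crictl version >/dev/null 2>&1; do sleep 1; done'

Here both succeed on the first probe, so the restart cost is dominated by systemctl restart crio itself (about 135ms of log time between the restart and the first successful stat).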
	I1025 10:56:58.644591  454751 ssh_runner.go:195] Run: crio --version
	I1025 10:56:58.676454  454751 ssh_runner.go:195] Run: crio --version
	I1025 10:56:58.715607  454751 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:56:58.718778  454751 cli_runner.go:164] Run: docker network inspect embed-certs-348342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:56:58.735770  454751 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:56:58.739775  454751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
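The bash one-liner is minikube's idempotent /etc/hosts update: filter out any existing host.minikube.internal line, append the current mapping, and copy the temp file back over /etc/hosts, so repeated starts never stack duplicate entries. The same idiom maintains the control-plane entry further down, leaving these two lines on the node:

    192.168.76.1	host.minikube.internal
    192.168.76.2	control-plane.minikube.internal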
	I1025 10:56:58.749949  454751 kubeadm.go:883] updating cluster {Name:embed-certs-348342 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:56:58.750091  454751 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:56:58.750165  454751 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:56:58.782993  454751 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:56:58.783017  454751 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:56:58.783080  454751 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:56:58.810614  454751 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:56:58.810640  454751 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:56:58.810648  454751 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:56:58.810765  454751 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-348342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
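The bare ExecStart= line in the unit above is deliberate: for systemd services, ExecStart is additive, so a drop-in must first clear the base unit's command before substituting its own. The minimal shape of the idiom (generic sketch, unit name hypothetical):

    # /etc/systemd/system/example.service.d/override.conf (hypothetical)
    [Service]
    ExecStart=
    ExecStart=/usr/local/bin/example --new-flags

This drop-in is what lands as the 368-byte 10-kubeadm.conf scp'd a few lines below.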
	I1025 10:56:58.810864  454751 ssh_runner.go:195] Run: crio config
	I1025 10:56:58.877932  454751 cni.go:84] Creating CNI manager for ""
	I1025 10:56:58.877954  454751 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:56:58.877973  454751 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:56:58.878008  454751 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-348342 NodeName:embed-certs-348342 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:56:58.878147  454751 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-348342"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
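The generated config above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which kubeadm splits on the --- separators. It is staged as /var/tmp/minikube/kubeadm.yaml.new (2215 bytes, scp'd below) and only promoted if it differs from the live copy, per the diff -u at 10:57:00. kubeadm can sanity-check such a file directly (sketch):

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new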
	
	I1025 10:56:58.878221  454751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:56:58.886957  454751 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:56:58.887033  454751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:56:58.894538  454751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:56:58.907793  454751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:56:58.920336  454751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1025 10:56:58.933581  454751 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:56:58.937579  454751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:56:58.948734  454751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:56:59.079549  454751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:56:59.098080  454751 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342 for IP: 192.168.76.2
	I1025 10:56:59.098156  454751 certs.go:195] generating shared ca certs ...
	I1025 10:56:59.098189  454751 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:56:59.098384  454751 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:56:59.098491  454751 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:56:59.098517  454751 certs.go:257] generating profile certs ...
	I1025 10:56:59.098658  454751 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/client.key
	I1025 10:56:59.098759  454751 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.key.6c3cab22
	I1025 10:56:59.098837  454751 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.key
	I1025 10:56:59.098984  454751 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:56:59.099057  454751 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:56:59.099083  454751 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:56:59.099143  454751 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:56:59.099195  454751 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:56:59.099253  454751 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:56:59.099328  454751 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:56:59.100168  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:56:59.121277  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:56:59.141852  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:56:59.165080  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:56:59.190103  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:56:59.210733  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:56:59.236805  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:56:59.276626  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1025 10:56:59.298913  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:56:59.327585  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:56:59.349122  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:56:59.369059  454751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:56:59.383559  454751 ssh_runner.go:195] Run: openssl version
	I1025 10:56:59.392524  454751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:56:59.402789  454751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:56:59.407259  454751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:56:59.407356  454751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:56:59.450710  454751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:56:59.459413  454751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:56:59.468884  454751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:56:59.472860  454751 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:56:59.472966  454751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:56:59.515196  454751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:56:59.523173  454751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:56:59.531373  454751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:56:59.535474  454751 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:56:59.535619  454751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:56:59.577815  454751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
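The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention: anything that trusts /etc/ssl/certs looks a CA up by its `openssl x509 -hash` value plus a .0 suffix, which is exactly what the preceding -hash runs compute. To verify one by hand (sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"   # expect a symlink resolving to minikubeCA.pem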
	I1025 10:56:59.585741  454751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:56:59.589697  454751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:56:59.632511  454751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:56:59.674564  454751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:56:59.720113  454751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:56:59.779332  454751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:56:59.849092  454751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
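openssl x509 -checkend 86400 exits 0 if the certificate will still be valid 24 hours from now and 1 otherwise, so the six probes above are a cheap expiry gate; all of them pass, which is why StartCluster proceeds without regenerating anything. Standalone form (sketch):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for >=24h" || echo "expiring soon: regenerate"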
	I1025 10:56:59.922105  454751 kubeadm.go:400] StartCluster: {Name:embed-certs-348342 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:56:59.922193  454751 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:56:59.922265  454751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:56:59.963052  454751 cri.go:89] found id: "4a176e83f06702f09feac763002a74b8b8a030874adc921f8bddd98aa3c974d4"
	I1025 10:56:59.963076  454751 cri.go:89] found id: "8fcdfc5fc2dc75f67348b352c94dacbcef58121b8688bd5a6ea85732681228cd"
	I1025 10:56:59.963082  454751 cri.go:89] found id: "c70dd3ad27c72e73d7f22a0f8ce5472875ecc49420f54d9480a48af44851b43d"
	I1025 10:56:59.963093  454751 cri.go:89] found id: "9e869b3a7afbb096c23279c50a357f29f02843cd43be8ae3176e4dc15d9e713d"
	I1025 10:56:59.963097  454751 cri.go:89] found id: ""
	I1025 10:56:59.963146  454751 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:56:59.975514  454751 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:56:59Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:56:59.975607  454751 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:56:59.987066  454751 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:56:59.987086  454751 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:56:59.987142  454751 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:56:59.997073  454751 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:56:59.997765  454751 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-348342" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:56:59.998091  454751 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-259409/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-348342" cluster setting kubeconfig missing "embed-certs-348342" context setting]
	I1025 10:56:59.998585  454751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:00.000488  454751 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:57:00.016591  454751 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 10:57:00.016631  454751 kubeadm.go:601] duration metric: took 29.537909ms to restartPrimaryControlPlane
	I1025 10:57:00.016643  454751 kubeadm.go:402] duration metric: took 94.54735ms to StartCluster
	I1025 10:57:00.016664  454751 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:00.016748  454751 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:57:00.018171  454751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:00.018509  454751 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:57:00.019059  454751 config.go:182] Loaded profile config "embed-certs-348342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:57:00.019020  454751 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:57:00.019116  454751 addons.go:69] Setting dashboard=true in profile "embed-certs-348342"
	I1025 10:57:00.019126  454751 addons.go:69] Setting default-storageclass=true in profile "embed-certs-348342"
	I1025 10:57:00.019132  454751 addons.go:238] Setting addon dashboard=true in "embed-certs-348342"
	W1025 10:57:00.019139  454751 addons.go:247] addon dashboard should already be in state true
	I1025 10:57:00.019138  454751 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-348342"
	I1025 10:57:00.019167  454751 host.go:66] Checking if "embed-certs-348342" exists ...
	I1025 10:57:00.019477  454751 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:57:00.019687  454751 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:57:00.019116  454751 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-348342"
	I1025 10:57:00.024347  454751 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-348342"
	W1025 10:57:00.025188  454751 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:57:00.025249  454751 host.go:66] Checking if "embed-certs-348342" exists ...
	I1025 10:57:00.029384  454751 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:57:00.025165  454751 out.go:179] * Verifying Kubernetes components...
	I1025 10:57:00.033027  454751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:57:00.074936  454751 addons.go:238] Setting addon default-storageclass=true in "embed-certs-348342"
	W1025 10:57:00.074971  454751 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:57:00.075000  454751 host.go:66] Checking if "embed-certs-348342" exists ...
	I1025 10:57:00.075507  454751 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:57:00.090069  454751 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:57:00.138581  454751 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:57:00.141708  454751 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1025 10:56:55.817185  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
	W1025 10:56:57.817383  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
	W1025 10:56:59.817452  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
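The 451806 lines interleaved above belong to the parallel default-k8s-diff-port-223394 start; 454751 is embed-certs-348342. Tests in this job run concurrently, so their streams interleave, and the PID field is the reliable way to untangle one of them (sketch; log filename hypothetical):

    grep ' 454751 ' start.log   # embed-certs-348342 lines only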
	I1025 10:57:00.141714  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:57:00.141813  454751 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:57:00.141900  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:57:00.146186  454751 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:57:00.146221  454751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:57:00.146303  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:57:00.185590  454751 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:57:00.185622  454751 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:57:00.185708  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:57:00.218534  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:57:00.235655  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:57:00.242282  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:57:00.480903  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:57:00.480927  454751 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:57:00.503577  454751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:57:00.545768  454751 node_ready.go:35] waiting up to 6m0s for node "embed-certs-348342" to be "Ready" ...
	I1025 10:57:00.551290  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:57:00.551369  454751 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:57:00.587428  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:57:00.587503  454751 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:57:00.590610  454751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:57:00.623813  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:57:00.623888  454751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:57:00.646859  454751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:57:00.686461  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:57:00.686540  454751 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:57:00.756729  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:57:00.756803  454751 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:57:00.811016  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:57:00.811091  454751 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:57:00.874615  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:57:00.874701  454751 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:57:00.889112  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:57:00.889195  454751 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:57:00.907205  454751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
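All ten dashboard manifests go through one kubectl apply with repeated -f flags; apply is declarative, so rerunning it against an existing cluster is safe. This is the invocation that completes 5.77s later at 10:57:06. An equivalent sketch points apply at the staged directory instead (caveat: that would also re-apply the other addon files staged there):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/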
	I1025 10:57:02.317774  451806 pod_ready.go:94] pod "coredns-66bc5c9577-w9r8g" is "Ready"
	I1025 10:57:02.317858  451806 pod_ready.go:86] duration metric: took 36.50663028s for pod "coredns-66bc5c9577-w9r8g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.325740  451806 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.331186  451806 pod_ready.go:94] pod "etcd-default-k8s-diff-port-223394" is "Ready"
	I1025 10:57:02.331222  451806 pod_ready.go:86] duration metric: took 5.455434ms for pod "etcd-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.339430  451806 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.345286  451806 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-223394" is "Ready"
	I1025 10:57:02.345323  451806 pod_ready.go:86] duration metric: took 5.864176ms for pod "kube-apiserver-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.348347  451806 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.515952  451806 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-223394" is "Ready"
	I1025 10:57:02.515999  451806 pod_ready.go:86] duration metric: took 167.624099ms for pod "kube-controller-manager-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.715432  451806 pod_ready.go:83] waiting for pod "kube-proxy-zpq57" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:03.115303  451806 pod_ready.go:94] pod "kube-proxy-zpq57" is "Ready"
	I1025 10:57:03.115333  451806 pod_ready.go:86] duration metric: took 399.872536ms for pod "kube-proxy-zpq57" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:03.315540  451806 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:03.715305  451806 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-223394" is "Ready"
	I1025 10:57:03.715337  451806 pod_ready.go:86] duration metric: took 399.766976ms for pod "kube-scheduler-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:03.715350  451806 pod_ready.go:40] duration metric: took 37.908136271s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:57:03.828803  451806 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:57:03.832103  451806 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-223394" cluster and "default" namespace by default
	I1025 10:57:05.183951  454751 node_ready.go:49] node "embed-certs-348342" is "Ready"
	I1025 10:57:05.183981  454751 node_ready.go:38] duration metric: took 4.638120054s for node "embed-certs-348342" to be "Ready" ...
	I1025 10:57:05.183995  454751 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:57:05.184058  454751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:57:06.554541  454751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.963853417s)
	I1025 10:57:06.554610  454751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.907681741s)
	I1025 10:57:06.679020  454751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.771723225s)
	I1025 10:57:06.679253  454751 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.495181757s)
	I1025 10:57:06.679290  454751 api_server.go:72] duration metric: took 6.660744922s to wait for apiserver process to appear ...
	I1025 10:57:06.679323  454751 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:57:06.679359  454751 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:57:06.683039  454751 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-348342 addons enable metrics-server
	
	I1025 10:57:06.688193  454751 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1025 10:57:06.688878  454751 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:57:06.690288  454751 api_server.go:141] control plane version: v1.34.1
	I1025 10:57:06.690311  454751 api_server.go:131] duration metric: took 10.965243ms to wait for apiserver health ...
	I1025 10:57:06.690321  454751 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:57:06.693050  454751 addons.go:514] duration metric: took 6.674023614s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:57:06.694452  454751 system_pods.go:59] 8 kube-system pods found
	I1025 10:57:06.694503  454751 system_pods.go:61] "coredns-66bc5c9577-sqrrf" [15846173-f49c-4d50-af52-3b1b371fde43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:57:06.694514  454751 system_pods.go:61] "etcd-embed-certs-348342" [65a59ffa-4cba-4290-8c46-07e62bcf564b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:57:06.694523  454751 system_pods.go:61] "kindnet-q5mzm" [4caa08ee-f6f3-442c-ad08-2be933f2869f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:57:06.694529  454751 system_pods.go:61] "kube-apiserver-embed-certs-348342" [b67dbed8-5ebd-4a9f-804c-ed82033d0e19] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:57:06.694536  454751 system_pods.go:61] "kube-controller-manager-embed-certs-348342" [9ce2257d-b332-492a-8553-f7736a99b5db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:57:06.694547  454751 system_pods.go:61] "kube-proxy-j9ngr" [946e15f1-043f-4f6e-a995-79bb16033e3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:57:06.694554  454751 system_pods.go:61] "kube-scheduler-embed-certs-348342" [4e9a5441-6278-4eca-82d2-606bda24b02d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:57:06.694560  454751 system_pods.go:61] "storage-provisioner" [9a91278c-945c-48bc-be8b-39e026d485b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:57:06.694565  454751 system_pods.go:74] duration metric: took 4.239439ms to wait for pod list to return data ...
	I1025 10:57:06.694572  454751 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:57:06.701044  454751 default_sa.go:45] found service account: "default"
	I1025 10:57:06.701067  454751 default_sa.go:55] duration metric: took 6.488904ms for default service account to be created ...
	I1025 10:57:06.701076  454751 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:57:06.709385  454751 system_pods.go:86] 8 kube-system pods found
	I1025 10:57:06.709476  454751 system_pods.go:89] "coredns-66bc5c9577-sqrrf" [15846173-f49c-4d50-af52-3b1b371fde43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:57:06.709503  454751 system_pods.go:89] "etcd-embed-certs-348342" [65a59ffa-4cba-4290-8c46-07e62bcf564b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:57:06.709547  454751 system_pods.go:89] "kindnet-q5mzm" [4caa08ee-f6f3-442c-ad08-2be933f2869f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:57:06.709575  454751 system_pods.go:89] "kube-apiserver-embed-certs-348342" [b67dbed8-5ebd-4a9f-804c-ed82033d0e19] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:57:06.709600  454751 system_pods.go:89] "kube-controller-manager-embed-certs-348342" [9ce2257d-b332-492a-8553-f7736a99b5db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:57:06.709634  454751 system_pods.go:89] "kube-proxy-j9ngr" [946e15f1-043f-4f6e-a995-79bb16033e3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:57:06.709660  454751 system_pods.go:89] "kube-scheduler-embed-certs-348342" [4e9a5441-6278-4eca-82d2-606bda24b02d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:57:06.709690  454751 system_pods.go:89] "storage-provisioner" [9a91278c-945c-48bc-be8b-39e026d485b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:57:06.709726  454751 system_pods.go:126] duration metric: took 8.643311ms to wait for k8s-apps to be running ...
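The "Running / Ready:ContainersNotReady" lines above mean the pod phase is Running while the Ready condition is still false; the system_pods gate only needs the phase, and the pod_ready loop further down waits on the condition. The same wait done by hand (sketch):

    kubectl -n kube-system wait pod --all --for=condition=Ready --timeout=4m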
	I1025 10:57:06.709753  454751 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:57:06.709841  454751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:57:06.730734  454751 system_svc.go:56] duration metric: took 20.971923ms WaitForService to wait for kubelet
	I1025 10:57:06.730813  454751 kubeadm.go:586] duration metric: took 6.712266693s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:57:06.730848  454751 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:57:06.735835  454751 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:57:06.735919  454751 node_conditions.go:123] node cpu capacity is 2
	I1025 10:57:06.735948  454751 node_conditions.go:105] duration metric: took 5.080465ms to run NodePressure ...
	I1025 10:57:06.735975  454751 start.go:241] waiting for startup goroutines ...
	I1025 10:57:06.736013  454751 start.go:246] waiting for cluster config update ...
	I1025 10:57:06.736041  454751 start.go:255] writing updated cluster config ...
	I1025 10:57:06.736361  454751 ssh_runner.go:195] Run: rm -f paused
	I1025 10:57:06.741191  454751 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:57:06.795560  454751 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sqrrf" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:57:08.824260  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:11.304311  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:13.802242  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:16.302490  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.53490707Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.538031866Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.538171666Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.538241993Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.542070064Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.542237057Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.542308622Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.54560521Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.545769167Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.545844663Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.549393011Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.549554752Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.758839221Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a3304968-bf8e-4aa2-a4a5-f70c1e4061dc name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.763201517Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=84600204-6ac9-4583-81f5-cdd194ae7d0b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.76629697Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j/dashboard-metrics-scraper" id=3e611786-69e1-4d4a-a35e-fc96be615df7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.766452656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.779743197Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.789379542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.827886008Z" level=info msg="Created container 880c32c6e4d36298f25604e265d4a1cd58e474073d844950f788323abad6cb80: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j/dashboard-metrics-scraper" id=3e611786-69e1-4d4a-a35e-fc96be615df7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.838340043Z" level=info msg="Starting container: 880c32c6e4d36298f25604e265d4a1cd58e474073d844950f788323abad6cb80" id=1cb80a04-a548-45b5-9471-df7bb7124eee name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.840195978Z" level=info msg="Started container" PID=1735 containerID=880c32c6e4d36298f25604e265d4a1cd58e474073d844950f788323abad6cb80 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j/dashboard-metrics-scraper id=1cb80a04-a548-45b5-9471-df7bb7124eee name=/runtime.v1.RuntimeService/StartContainer sandboxID=76dcaa57282a0821af4ea153a07cd05320da6dce1c061db52f01ad6ede203f5e
	Oct 25 10:57:11 default-k8s-diff-port-223394 conmon[1733]: conmon 880c32c6e4d36298f256 <ninfo>: container 1735 exited with status 1
	Oct 25 10:57:12 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:12.02657254Z" level=info msg="Removing container: 1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34" id=5adfa69b-de70-4584-a164-461bb0ceadbf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:57:12 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:12.04048323Z" level=info msg="Error loading conmon cgroup of container 1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34: cgroup deleted" id=5adfa69b-de70-4584-a164-461bb0ceadbf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:57:12 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:12.049692421Z" level=info msg="Removed container 1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j/dashboard-metrics-scraper" id=5adfa69b-de70-4584-a164-461bb0ceadbf name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	880c32c6e4d36       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   76dcaa57282a0       dashboard-metrics-scraper-6ffb444bf9-lgh8j             kubernetes-dashboard
	9ae681564efc7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   3907d5fe3c85e       storage-provisioner                                    kube-system
	006247d5cea81       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   8572972dd38bc       kubernetes-dashboard-855c9754f9-wmhcq                  kubernetes-dashboard
	87d8695047930       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   4c3ee99bab7c6       coredns-66bc5c9577-w9r8g                               kube-system
	07b8b77e031a1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   fca6e167999ac       busybox                                                default
	8fe3eb331d0de       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   7e763f2e49474       kube-proxy-zpq57                                       kube-system
	07de50a4075df       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   3907d5fe3c85e       storage-provisioner                                    kube-system
	f62f9dca6b34e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   2bee7666853aa       kindnet-tclvn                                          kube-system
	eb487ee4e10f6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e7dc1a723bbdb       kube-scheduler-default-k8s-diff-port-223394            kube-system
	6c982a01974be       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   f86e2e440e3e4       etcd-default-k8s-diff-port-223394                      kube-system
	9779636f70f0c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6f1887a3f8b06       kube-controller-manager-default-k8s-diff-port-223394   kube-system
	c82e104c40d0d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   5ad8bea0d798a       kube-apiserver-default-k8s-diff-port-223394            kube-system
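dashboard-metrics-scraper sits at attempt 3 and Exited while everything else runs. The CRI-O lines above show the scraper container being created from registry.k8s.io/echoserver:1.4 (the MetricsScraper override in CustomAddonImages in the cluster config earlier), so repeated exit-status-1 restarts on this arm64 host are plausibly an image/arch mismatch rather than a dashboard bug; the container's own output is not in this log, so that reading is unconfirmed. One way to inspect what was actually pulled (sketch):

    sudo crictl inspecti registry.k8s.io/echoserver:1.4 | grep -iE '"architecture"|"os"'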
	
	
	==> coredns [87d869504793063a19919eff743283ee2b55be58b9d8352930f36eb27a405677] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58091 - 23361 "HINFO IN 1723357095776880230.8227621694220695296. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018957768s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-223394
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-223394
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=default-k8s-diff-port-223394
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_54_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:54:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-223394
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:57:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:57:05 +0000   Sat, 25 Oct 2025 10:54:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:57:05 +0000   Sat, 25 Oct 2025 10:54:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:57:05 +0000   Sat, 25 Oct 2025 10:54:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:57:05 +0000   Sat, 25 Oct 2025 10:55:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-223394
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                de5cd403-6ec9-42cd-9429-85d79e1a8304
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-w9r8g                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-default-k8s-diff-port-223394                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-tclvn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-223394             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-223394    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-zpq57                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-223394             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lgh8j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wmhcq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s (x9 over 2m30s)  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x7 over 2m30s)  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m22s                  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s                  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m22s                  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m19s                  node-controller  Node default-k8s-diff-port-223394 event: Registered Node default-k8s-diff-port-223394 in Controller
	  Normal   NodeReady                96s                    kubelet          Node default-k8s-diff-port-223394 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node default-k8s-diff-port-223394 event: Registered Node default-k8s-diff-port-223394 in Controller
	
	
	==> dmesg <==
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[ +23.146409] overlayfs: idmapped layers are currently not supported
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
	[Oct25 10:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:55] overlayfs: idmapped layers are currently not supported
	[Oct25 10:56] overlayfs: idmapped layers are currently not supported
	[ +41.501413] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6c982a01974bebe010fd07605d0e7e6f34d2e021c6ffb16dedef170e47c26875] <==
	{"level":"warn","ts":"2025-10-25T10:56:21.434569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.460529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.490721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.526661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.538491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.580254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.610239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.638091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.655162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.682231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.706228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.746424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.760050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.827164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.853497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.893341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.921550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.949097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.981908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:22.042831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:22.096078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:22.146358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:22.206150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:22.231377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:22.320780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55400","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:57:20 up  2:39,  0 user,  load average: 4.24, 3.55, 2.96
	Linux default-k8s-diff-port-223394 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f62f9dca6b34ebcc637ed54376e046a6d45148e3c61defb45110e0d36387c285] <==
	I1025 10:56:25.324833       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:56:25.325020       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:56:25.325149       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:56:25.325164       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:56:25.325174       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:56:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:56:25.614160       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:56:25.614259       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:56:25.614324       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:56:25.615454       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:56:55.530367       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:56:55.614978       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:56:55.616269       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:56:55.616427       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1025 10:56:56.715172       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:56:56.715215       1 metrics.go:72] Registering metrics
	I1025 10:56:56.715276       1 controller.go:711] "Syncing nftables rules"
	I1025 10:57:05.530101       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:57:05.530219       1 main.go:301] handling current node
	I1025 10:57:15.531751       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:57:15.531853       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c82e104c40d0dc2552e92b2571bdc6ca33dc11c21c904ce5b807e393939d0fe1] <==
	I1025 10:56:23.874468       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:56:23.879667       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:56:23.879689       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:56:23.879705       1 policy_source.go:240] refreshing policies
	I1025 10:56:23.881456       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:56:23.881483       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:56:23.881489       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:56:23.892846       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:56:23.936740       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:56:23.936898       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:56:23.937526       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:56:23.938033       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:56:23.961319       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:56:23.988209       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:56:24.462087       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:56:24.568187       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:56:24.623676       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:56:24.682328       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:56:24.700043       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:56:24.732657       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:56:25.014828       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.219.64"}
	I1025 10:56:25.059659       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.179.240"}
	I1025 10:56:27.269300       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:56:27.712087       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:56:27.835174       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9779636f70f0c278cba11f390d72d18ecf6492c685c187c54ca454f436e08653] <==
	I1025 10:56:27.254404       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:56:27.255350       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:56:27.255448       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:56:27.257943       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:56:27.258077       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:56:27.258466       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:56:27.258739       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:56:27.258857       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:56:27.260411       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:56:27.269563       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:56:27.269889       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:56:27.270024       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:56:27.273197       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:56:27.275296       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:56:27.275412       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:56:27.279048       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:56:27.282777       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:56:27.282827       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:56:27.288818       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:56:27.295261       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:56:27.295479       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:56:27.295590       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-223394"
	I1025 10:56:27.295686       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:56:27.296809       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:56:27.307809       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [8fe3eb331d0de316a3705b7640f3dfebe0d4ca0136afef7d77f67dcd835bce76] <==
	I1025 10:56:25.483984       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:56:25.600252       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:56:25.700483       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:56:25.700518       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:56:25.700607       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:56:25.720511       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:56:25.720567       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:56:25.724927       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:56:25.725290       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:56:25.725311       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:56:25.726197       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:56:25.726214       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:56:25.726500       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:56:25.726558       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:56:25.729024       1 config.go:309] "Starting node config controller"
	I1025 10:56:25.729089       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:56:25.729119       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:56:25.735939       1 config.go:200] "Starting service config controller"
	I1025 10:56:25.736036       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:56:25.736069       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:56:25.827185       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:56:25.827302       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [eb487ee4e10f68f40f25f7e75f8231d3678bb16df616131e9dd7d0bbf8f2f3ed] <==
	I1025 10:56:21.036809       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:56:23.799548       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:56:23.799668       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:56:23.799703       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:56:23.799755       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:56:23.914137       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:56:23.914166       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:56:23.928134       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:56:23.930492       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:56:23.930516       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:56:23.930536       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:56:24.042159       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:56:28 default-k8s-diff-port-223394 kubelet[780]: W1025 10:56:28.162187     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/crio-8572972dd38bc93e4a40d6013ec132f234813569d85f5ed14d36400f8268d970 WatchSource:0}: Error finding container 8572972dd38bc93e4a40d6013ec132f234813569d85f5ed14d36400f8268d970: Status 404 returned error can't find the container with id 8572972dd38bc93e4a40d6013ec132f234813569d85f5ed14d36400f8268d970
	Oct 25 10:56:32 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:32.140953     780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 10:56:32 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:32.902542     780 scope.go:117] "RemoveContainer" containerID="5d714b9bbb9b6e23291d95edc0df57b5efb6e6d78ddee0ad1fb001cf714231e6"
	Oct 25 10:56:33 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:33.906679     780 scope.go:117] "RemoveContainer" containerID="5d714b9bbb9b6e23291d95edc0df57b5efb6e6d78ddee0ad1fb001cf714231e6"
	Oct 25 10:56:33 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:33.906958     780 scope.go:117] "RemoveContainer" containerID="2e65d01c457c37cb94eb964d5f2e47122236420bc787ce32d0d4be257f0a6ae5"
	Oct 25 10:56:33 default-k8s-diff-port-223394 kubelet[780]: E1025 10:56:33.907106     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lgh8j_kubernetes-dashboard(0f8d0c94-9b68-4974-bd20-76a4605246dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j" podUID="0f8d0c94-9b68-4974-bd20-76a4605246dd"
	Oct 25 10:56:34 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:34.909922     780 scope.go:117] "RemoveContainer" containerID="2e65d01c457c37cb94eb964d5f2e47122236420bc787ce32d0d4be257f0a6ae5"
	Oct 25 10:56:34 default-k8s-diff-port-223394 kubelet[780]: E1025 10:56:34.910578     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lgh8j_kubernetes-dashboard(0f8d0c94-9b68-4974-bd20-76a4605246dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j" podUID="0f8d0c94-9b68-4974-bd20-76a4605246dd"
	Oct 25 10:56:38 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:38.110100     780 scope.go:117] "RemoveContainer" containerID="2e65d01c457c37cb94eb964d5f2e47122236420bc787ce32d0d4be257f0a6ae5"
	Oct 25 10:56:38 default-k8s-diff-port-223394 kubelet[780]: E1025 10:56:38.110311     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lgh8j_kubernetes-dashboard(0f8d0c94-9b68-4974-bd20-76a4605246dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j" podUID="0f8d0c94-9b68-4974-bd20-76a4605246dd"
	Oct 25 10:56:48 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:48.757216     780 scope.go:117] "RemoveContainer" containerID="2e65d01c457c37cb94eb964d5f2e47122236420bc787ce32d0d4be257f0a6ae5"
	Oct 25 10:56:48 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:48.948059     780 scope.go:117] "RemoveContainer" containerID="2e65d01c457c37cb94eb964d5f2e47122236420bc787ce32d0d4be257f0a6ae5"
	Oct 25 10:56:48 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:48.948281     780 scope.go:117] "RemoveContainer" containerID="1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34"
	Oct 25 10:56:48 default-k8s-diff-port-223394 kubelet[780]: E1025 10:56:48.948475     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lgh8j_kubernetes-dashboard(0f8d0c94-9b68-4974-bd20-76a4605246dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j" podUID="0f8d0c94-9b68-4974-bd20-76a4605246dd"
	Oct 25 10:56:48 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:48.969922     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wmhcq" podStartSLOduration=11.82807827 podStartE2EDuration="21.969904735s" podCreationTimestamp="2025-10-25 10:56:27 +0000 UTC" firstStartedPulling="2025-10-25 10:56:28.165053013 +0000 UTC m=+10.622568296" lastFinishedPulling="2025-10-25 10:56:38.306879486 +0000 UTC m=+20.764394761" observedRunningTime="2025-10-25 10:56:38.941735969 +0000 UTC m=+21.399251252" watchObservedRunningTime="2025-10-25 10:56:48.969904735 +0000 UTC m=+31.427420018"
	Oct 25 10:56:55 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:55.968540     780 scope.go:117] "RemoveContainer" containerID="07de50a4075df0e36d34be8ef0e96165ff750994856c24e4902fe73fc3fcb1fb"
	Oct 25 10:56:58 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:58.109866     780 scope.go:117] "RemoveContainer" containerID="1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34"
	Oct 25 10:56:58 default-k8s-diff-port-223394 kubelet[780]: E1025 10:56:58.110132     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lgh8j_kubernetes-dashboard(0f8d0c94-9b68-4974-bd20-76a4605246dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j" podUID="0f8d0c94-9b68-4974-bd20-76a4605246dd"
	Oct 25 10:57:11 default-k8s-diff-port-223394 kubelet[780]: I1025 10:57:11.757382     780 scope.go:117] "RemoveContainer" containerID="1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34"
	Oct 25 10:57:12 default-k8s-diff-port-223394 kubelet[780]: I1025 10:57:12.018051     780 scope.go:117] "RemoveContainer" containerID="1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34"
	Oct 25 10:57:12 default-k8s-diff-port-223394 kubelet[780]: I1025 10:57:12.018464     780 scope.go:117] "RemoveContainer" containerID="880c32c6e4d36298f25604e265d4a1cd58e474073d844950f788323abad6cb80"
	Oct 25 10:57:12 default-k8s-diff-port-223394 kubelet[780]: E1025 10:57:12.018666     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lgh8j_kubernetes-dashboard(0f8d0c94-9b68-4974-bd20-76a4605246dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j" podUID="0f8d0c94-9b68-4974-bd20-76a4605246dd"
	Oct 25 10:57:17 default-k8s-diff-port-223394 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:57:17 default-k8s-diff-port-223394 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:57:17 default-k8s-diff-port-223394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [006247d5cea81537221f79175bf519982dcf6cbc03bf6367c41f687ef833cf21] <==
	2025/10/25 10:56:38 Using namespace: kubernetes-dashboard
	2025/10/25 10:56:38 Using in-cluster config to connect to apiserver
	2025/10/25 10:56:38 Using secret token for csrf signing
	2025/10/25 10:56:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:56:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:56:38 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:56:38 Generating JWE encryption key
	2025/10/25 10:56:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:56:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:56:39 Initializing JWE encryption key from synchronized object
	2025/10/25 10:56:39 Creating in-cluster Sidecar client
	2025/10/25 10:56:39 Serving insecurely on HTTP port: 9090
	2025/10/25 10:56:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:57:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:56:38 Starting overwatch
	
	
	==> storage-provisioner [07de50a4075df0e36d34be8ef0e96165ff750994856c24e4902fe73fc3fcb1fb] <==
	I1025 10:56:25.347878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:56:55.350260       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9ae681564efc76dca39400bd2a4f79850bccb015cd2dac17da552bc3b801e930] <==
	I1025 10:56:56.071423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:56:56.083990       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:56:56.084275       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:56:56.089269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:59.544905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:03.814426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:07.413239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:10.468193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:13.491915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:13.498649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:57:13.498808       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:57:13.501248       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-223394_adc12a52-6957-4610-be4e-05dda88c2ada!
	I1025 10:57:13.503047       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"349ccd11-8226-4feb-9ee3-b35b622cb7d9", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-223394_adc12a52-6957-4610-be4e-05dda88c2ada became leader
	W1025 10:57:13.503392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:13.522295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:57:13.602356       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-223394_adc12a52-6957-4610-be4e-05dda88c2ada!
	W1025 10:57:15.527656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:15.534331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:17.538071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:17.547324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:19.551750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:19.557676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-223394 -n default-k8s-diff-port-223394
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-223394 -n default-k8s-diff-port-223394: exit status 2 (383.212607ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-223394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-223394
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-223394:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7",
	        "Created": "2025-10-25T10:54:33.801036185Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 451934,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:56:10.658387728Z",
	            "FinishedAt": "2025-10-25T10:56:09.79862304Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/hosts",
	        "LogPath": "/var/lib/docker/containers/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7-json.log",
	        "Name": "/default-k8s-diff-port-223394",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-223394:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-223394",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7",
	                "LowerDir": "/var/lib/docker/overlay2/16c3b01afa0b4ec1bbf75b73359cd04d1fc7ed7d6a6cc96f08daeb4bea593cde-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16c3b01afa0b4ec1bbf75b73359cd04d1fc7ed7d6a6cc96f08daeb4bea593cde/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16c3b01afa0b4ec1bbf75b73359cd04d1fc7ed7d6a6cc96f08daeb4bea593cde/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16c3b01afa0b4ec1bbf75b73359cd04d1fc7ed7d6a6cc96f08daeb4bea593cde/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-223394",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-223394/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-223394",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-223394",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-223394",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9d3a8b9d3048d7a09346a8eed98172f6a3da0497ffaf764f484013c0046e47f2",
	            "SandboxKey": "/var/run/docker/netns/9d3a8b9d3048",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-223394": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:af:d1:33:2c:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8140ea88edc3e6f9170c2a8375ca78b30531642cc0a79f4070e57085e0519f4",
	                    "EndpointID": "a5d34eab1933401a98868a8e62cc481865c3d5f086d92515332cca2ae98779cc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-223394",
	                        "fdfe0713435e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
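Note that every container port in the inspect output above is published only on 127.0.0.1: SSH (22/tcp) on host port 33428 and this profile's non-default API server port (8444/tcp) on 33431. A minimal sketch of reading one of these mappings back with the same Go-template style the test driver uses later in this log (profile name taken from the inspect output; port values differ per run):

    # Host port mapped to the container's SSH port; prints 33428 for the dump above.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      default-k8s-diff-port-223394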
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223394 -n default-k8s-diff-port-223394
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223394 -n default-k8s-diff-port-223394: exit status 2 (362.549855ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-223394 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-223394 logs -n 25: (1.351244342s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-771620 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-771620          │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ delete  │ -p cert-options-771620                                                                                                                                                                                                                        │ cert-options-771620          │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:51 UTC │
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:51 UTC │ 25 Oct 25 10:52 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-031983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:52 UTC │                     │
	│ stop    │ -p old-k8s-version-031983 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-031983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:53 UTC │
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:54 UTC │
	│ image   │ old-k8s-version-031983 image list --format=json                                                                                                                                                                                               │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ pause   │ -p old-k8s-version-031983 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │                     │
	│ delete  │ -p old-k8s-version-031983                                                                                                                                                                                                                     │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ delete  │ -p old-k8s-version-031983                                                                                                                                                                                                                     │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p cert-expiration-736062 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ delete  │ -p cert-expiration-736062                                                                                                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-223394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-223394 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-223394 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-348342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │                     │
	│ stop    │ -p embed-certs-348342 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-348342 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │                     │
	│ image   │ default-k8s-diff-port-223394 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p default-k8s-diff-port-223394 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:56:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:56:51.596284  454751 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:56:51.596406  454751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:56:51.596418  454751 out.go:374] Setting ErrFile to fd 2...
	I1025 10:56:51.596424  454751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:56:51.596739  454751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:56:51.597129  454751 out.go:368] Setting JSON to false
	I1025 10:56:51.598263  454751 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9563,"bootTime":1761380249,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:56:51.598338  454751 start.go:141] virtualization:  
	I1025 10:56:51.603483  454751 out.go:179] * [embed-certs-348342] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:56:51.606747  454751 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:56:51.606790  454751 notify.go:220] Checking for updates...
	I1025 10:56:51.613384  454751 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:56:51.616591  454751 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:56:51.619931  454751 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:56:51.623157  454751 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:56:51.626214  454751 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:56:51.629791  454751 config.go:182] Loaded profile config "embed-certs-348342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:56:51.630456  454751 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:56:51.663255  454751 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:56:51.663447  454751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:56:51.723736  454751 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:56:51.713833036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:56:51.723844  454751 docker.go:318] overlay module found
	I1025 10:56:51.726998  454751 out.go:179] * Using the docker driver based on existing profile
	I1025 10:56:51.729815  454751 start.go:305] selected driver: docker
	I1025 10:56:51.729835  454751 start.go:925] validating driver "docker" against &{Name:embed-certs-348342 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:56:51.729949  454751 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:56:51.730811  454751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:56:51.787063  454751 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:56:51.777393267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:56:51.787413  454751 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:56:51.787445  454751 cni.go:84] Creating CNI manager for ""
	I1025 10:56:51.787506  454751 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:56:51.787547  454751 start.go:349] cluster config:
	{Name:embed-certs-348342 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:56:51.792524  454751 out.go:179] * Starting "embed-certs-348342" primary control-plane node in "embed-certs-348342" cluster
	I1025 10:56:51.795363  454751 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:56:51.798268  454751 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:56:51.801046  454751 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:56:51.801116  454751 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:56:51.801130  454751 cache.go:58] Caching tarball of preloaded images
	I1025 10:56:51.801137  454751 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:56:51.801295  454751 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:56:51.801323  454751 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:56:51.801498  454751 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/config.json ...
	I1025 10:56:51.823203  454751 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:56:51.823228  454751 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:56:51.823247  454751 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:56:51.823279  454751 start.go:360] acquireMachinesLock for embed-certs-348342: {Name:mk6a33c3a0d7242e8af53b027ee4f0bef4d472df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:56:51.823344  454751 start.go:364] duration metric: took 38.769µs to acquireMachinesLock for "embed-certs-348342"
	I1025 10:56:51.823368  454751 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:56:51.823376  454751 fix.go:54] fixHost starting: 
	I1025 10:56:51.823644  454751 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:56:51.840608  454751 fix.go:112] recreateIfNeeded on embed-certs-348342: state=Stopped err=<nil>
	W1025 10:56:51.840660  454751 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 10:56:51.318451  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
	W1025 10:56:53.816466  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
	I1025 10:56:51.843806  454751 out.go:252] * Restarting existing docker container for "embed-certs-348342" ...
	I1025 10:56:51.843894  454751 cli_runner.go:164] Run: docker start embed-certs-348342
	I1025 10:56:52.124823  454751 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:56:52.147848  454751 kic.go:430] container "embed-certs-348342" state is running.
	I1025 10:56:52.150639  454751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348342
	I1025 10:56:52.180436  454751 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/config.json ...
	I1025 10:56:52.180700  454751 machine.go:93] provisionDockerMachine start ...
	I1025 10:56:52.180774  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:52.200616  454751 main.go:141] libmachine: Using SSH client type: native
	I1025 10:56:52.200946  454751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1025 10:56:52.200958  454751 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:56:52.201581  454751 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 10:56:55.356290  454751 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-348342
	
	I1025 10:56:55.356313  454751 ubuntu.go:182] provisioning hostname "embed-certs-348342"
	I1025 10:56:55.356405  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:55.376664  454751 main.go:141] libmachine: Using SSH client type: native
	I1025 10:56:55.376975  454751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1025 10:56:55.376992  454751 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-348342 && echo "embed-certs-348342" | sudo tee /etc/hostname
	I1025 10:56:55.540924  454751 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-348342
	
	I1025 10:56:55.541007  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:55.559401  454751 main.go:141] libmachine: Using SSH client type: native
	I1025 10:56:55.559724  454751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1025 10:56:55.559749  454751 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-348342' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-348342/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-348342' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:56:55.714279  454751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:56:55.714309  454751 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:56:55.714388  454751 ubuntu.go:190] setting up certificates
	I1025 10:56:55.714415  454751 provision.go:84] configureAuth start
	I1025 10:56:55.714499  454751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348342
	I1025 10:56:55.731441  454751 provision.go:143] copyHostCerts
	I1025 10:56:55.731511  454751 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:56:55.731536  454751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:56:55.731620  454751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:56:55.731726  454751 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:56:55.731737  454751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:56:55.731765  454751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:56:55.731831  454751 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:56:55.731842  454751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:56:55.731866  454751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:56:55.731926  454751 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.embed-certs-348342 san=[127.0.0.1 192.168.76.2 embed-certs-348342 localhost minikube]
	I1025 10:56:56.414963  454751 provision.go:177] copyRemoteCerts
	I1025 10:56:56.415039  454751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:56:56.415082  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:56.433167  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:56:56.538049  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:56:56.558525  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 10:56:56.577561  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:56:56.596044  454751 provision.go:87] duration metric: took 881.592167ms to configureAuth
	I1025 10:56:56.596074  454751 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:56:56.596273  454751 config.go:182] Loaded profile config "embed-certs-348342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:56:56.596380  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:56.614633  454751 main.go:141] libmachine: Using SSH client type: native
	I1025 10:56:56.614969  454751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1025 10:56:56.614996  454751 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:56:56.975554  454751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:56:56.975573  454751 machine.go:96] duration metric: took 4.794859368s to provisionDockerMachine
	I1025 10:56:56.975584  454751 start.go:293] postStartSetup for "embed-certs-348342" (driver="docker")
	I1025 10:56:56.975595  454751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:56:56.975669  454751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:56:56.975710  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:56.997000  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:56:57.106188  454751 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:56:57.109511  454751 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:56:57.109591  454751 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:56:57.109618  454751 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:56:57.109681  454751 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:56:57.109769  454751 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:56:57.109877  454751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:56:57.117339  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:56:57.136927  454751 start.go:296] duration metric: took 161.326672ms for postStartSetup
	I1025 10:56:57.137008  454751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:56:57.137059  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:57.154756  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:56:57.262837  454751 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:56:57.268733  454751 fix.go:56] duration metric: took 5.445349479s for fixHost
	I1025 10:56:57.268757  454751 start.go:83] releasing machines lock for "embed-certs-348342", held for 5.445400162s
	I1025 10:56:57.268839  454751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348342
	I1025 10:56:57.285928  454751 ssh_runner.go:195] Run: cat /version.json
	I1025 10:56:57.286042  454751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:56:57.286073  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:57.286119  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:56:57.303771  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:56:57.306253  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:56:57.413548  454751 ssh_runner.go:195] Run: systemctl --version
	I1025 10:56:57.533249  454751 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:56:57.577023  454751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:56:57.581930  454751 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:56:57.582072  454751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:56:57.591277  454751 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:56:57.591304  454751 start.go:495] detecting cgroup driver to use...
	I1025 10:56:57.591373  454751 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:56:57.591451  454751 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:56:57.607174  454751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:56:57.620561  454751 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:56:57.620654  454751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:56:57.637281  454751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:56:57.655757  454751 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:56:57.793765  454751 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:56:57.925017  454751 docker.go:234] disabling docker service ...
	I1025 10:56:57.925096  454751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:56:57.941915  454751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:56:57.955616  454751 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:56:58.084465  454751 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:56:58.216579  454751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:56:58.233280  454751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:56:58.248840  454751 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:56:58.248925  454751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.265186  454751 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:56:58.265266  454751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.274415  454751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.283616  454751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.292866  454751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:56:58.301170  454751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.310509  454751 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.320244  454751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:56:58.330195  454751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:56:58.338866  454751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:56:58.347150  454751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:56:58.472799  454751 ssh_runner.go:195] Run: sudo systemctl restart crio
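The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted; a minimal sketch for confirming the resulting values on the node (file path and keys taken from the commands above):

    # Settings the sed edits above should leave in 02-crio.conf:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs" (with conmon_cgroup = "pod")
    #   "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf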
	I1025 10:56:58.607302  454751 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:56:58.607425  454751 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:56:58.611913  454751 start.go:563] Will wait 60s for crictl version
	I1025 10:56:58.612019  454751 ssh_runner.go:195] Run: which crictl
	I1025 10:56:58.615893  454751 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:56:58.644417  454751 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:56:58.644591  454751 ssh_runner.go:195] Run: crio --version
	I1025 10:56:58.676454  454751 ssh_runner.go:195] Run: crio --version
	I1025 10:56:58.715607  454751 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:56:58.718778  454751 cli_runner.go:164] Run: docker network inspect embed-certs-348342 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:56:58.735770  454751 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:56:58.739775  454751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:56:58.749949  454751 kubeadm.go:883] updating cluster {Name:embed-certs-348342 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:56:58.750091  454751 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:56:58.750165  454751 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:56:58.782993  454751 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:56:58.783017  454751 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:56:58.783080  454751 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:56:58.810614  454751 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:56:58.810640  454751 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:56:58.810648  454751 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:56:58.810765  454751 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-348342 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
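The kubelet drop-in above lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below); one way to view the merged unit on the node, as a sketch:

    # Show kubelet.service together with minikube's 10-kubeadm.conf drop-in.
    sudo systemctl cat kubelet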
	I1025 10:56:58.810864  454751 ssh_runner.go:195] Run: crio config
	I1025 10:56:58.877932  454751 cni.go:84] Creating CNI manager for ""
	I1025 10:56:58.877954  454751 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:56:58.877973  454751 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:56:58.878008  454751 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-348342 NodeName:embed-certs-348342 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:56:58.878147  454751 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-348342"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
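The rendered config above is copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp line below). A sketch of checking it offline, assuming a kubeadm new enough (v1.26+) to ship `kubeadm config validate`:

    # Validate the generated kubeadm config without touching the cluster.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new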
	I1025 10:56:58.878221  454751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:56:58.886957  454751 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:56:58.887033  454751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:56:58.894538  454751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:56:58.907793  454751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:56:58.920336  454751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1025 10:56:58.933581  454751 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:56:58.937579  454751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:56:58.948734  454751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:56:59.079549  454751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:56:59.098080  454751 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342 for IP: 192.168.76.2
	I1025 10:56:59.098156  454751 certs.go:195] generating shared ca certs ...
	I1025 10:56:59.098189  454751 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:56:59.098384  454751 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:56:59.098491  454751 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:56:59.098517  454751 certs.go:257] generating profile certs ...
	I1025 10:56:59.098658  454751 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/client.key
	I1025 10:56:59.098759  454751 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.key.6c3cab22
	I1025 10:56:59.098837  454751 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.key
	I1025 10:56:59.098984  454751 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:56:59.099057  454751 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:56:59.099083  454751 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:56:59.099143  454751 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:56:59.099195  454751 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:56:59.099253  454751 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:56:59.099328  454751 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:56:59.100168  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:56:59.121277  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:56:59.141852  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:56:59.165080  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:56:59.190103  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:56:59.210733  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:56:59.236805  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:56:59.276626  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/embed-certs-348342/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1025 10:56:59.298913  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:56:59.327585  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:56:59.349122  454751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:56:59.369059  454751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:56:59.383559  454751 ssh_runner.go:195] Run: openssl version
	I1025 10:56:59.392524  454751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:56:59.402789  454751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:56:59.407259  454751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:56:59.407356  454751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:56:59.450710  454751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:56:59.459413  454751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:56:59.468884  454751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:56:59.472860  454751 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:56:59.472966  454751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:56:59.515196  454751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:56:59.523173  454751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:56:59.531373  454751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:56:59.535474  454751 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:56:59.535619  454751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:56:59.577815  454751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
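
Each cert install above follows the same two steps: link the PEM into /usr/share/ca-certificates, then link it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem) so standard TLS trust lookups can find it. A simplified sketch of the hash-link step, shelling out to openssl exactly as the log does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCert computes the OpenSSL subject hash of certPath and creates
    // the <hash>.0 symlink in trustDir, mirroring the logged `ln -fs`.
    func linkCert(certPath, trustDir string) error {
        // openssl x509 -hash -noout -in <cert> prints the subject hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(trustDir, hash+".0")
        _ = os.Remove(link) // replace any stale link, like ln -f
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
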
	I1025 10:56:59.585741  454751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:56:59.589697  454751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:56:59.632511  454751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:56:59.674564  454751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:56:59.720113  454751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:56:59.779332  454751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:56:59.849092  454751 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
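
The `-checkend 86400` runs above verify that each control-plane cert will still be valid 24 hours from now (openssl exits non-zero if it would expire inside that window). The same check expressed with Go's standard library, a sketch using one cert path from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("certificate expires within 24h; would need regeneration")
        }
    }
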
	I1025 10:56:59.922105  454751 kubeadm.go:400] StartCluster: {Name:embed-certs-348342 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-348342 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:56:59.922193  454751 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:56:59.922265  454751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:56:59.963052  454751 cri.go:89] found id: "4a176e83f06702f09feac763002a74b8b8a030874adc921f8bddd98aa3c974d4"
	I1025 10:56:59.963076  454751 cri.go:89] found id: "8fcdfc5fc2dc75f67348b352c94dacbcef58121b8688bd5a6ea85732681228cd"
	I1025 10:56:59.963082  454751 cri.go:89] found id: "c70dd3ad27c72e73d7f22a0f8ce5472875ecc49420f54d9480a48af44851b43d"
	I1025 10:56:59.963093  454751 cri.go:89] found id: "9e869b3a7afbb096c23279c50a357f29f02843cd43be8ae3176e4dc15d9e713d"
	I1025 10:56:59.963097  454751 cri.go:89] found id: ""
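
The "found id" entries come from listing kube-system containers by CRI label; the empty final id is just the trailing newline of `--quiet` output. A minimal sketch of that discovery step around crictl (flags as logged):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // --quiet prints one container ID per line; --label filters by pod namespace.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if line = strings.TrimSpace(line); line != "" {
                ids = append(ids, line)
            }
        }
        fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
    }
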
	I1025 10:56:59.963146  454751 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:56:59.975514  454751 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:56:59Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:56:59.975607  454751 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:56:59.987066  454751 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:56:59.987086  454751 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:56:59.987142  454751 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:56:59.997073  454751 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:56:59.997765  454751 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-348342" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:56:59.998091  454751 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-259409/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-348342" cluster setting kubeconfig missing "embed-certs-348342" context setting]
	I1025 10:56:59.998585  454751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
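
The repair logged here adds the missing "embed-certs-348342" cluster and context entries to the shared kubeconfig. An illustrative version using client-go's clientcmd loader; the server URL and client.key path appear in the log, while the CA and client.crt paths are assumed siblings of the logged files:

    package main

    import (
        clientcmd "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        base := "/home/jenkins/minikube-integration/21767-259409"
        path := base + "/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }
        name := "embed-certs-348342"
        cluster := api.NewCluster()
        cluster.Server = "https://192.168.76.2:8443"
        cluster.CertificateAuthority = base + "/.minikube/ca.crt"
        cfg.Clusters[name] = cluster
        user := api.NewAuthInfo()
        user.ClientCertificate = base + "/.minikube/profiles/" + name + "/client.crt" // assumed path
        user.ClientKey = base + "/.minikube/profiles/" + name + "/client.key"
        cfg.AuthInfos[name] = user
        ctx := api.NewContext()
        ctx.Cluster = name
        ctx.AuthInfo = name
        cfg.Contexts[name] = ctx
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            panic(err)
        }
    }
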
	I1025 10:57:00.000488  454751 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:57:00.016591  454751 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 10:57:00.016631  454751 kubeadm.go:601] duration metric: took 29.537909ms to restartPrimaryControlPlane
	I1025 10:57:00.016643  454751 kubeadm.go:402] duration metric: took 94.54735ms to StartCluster
	I1025 10:57:00.016664  454751 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:00.016748  454751 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:57:00.018171  454751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:00.018509  454751 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:57:00.019059  454751 config.go:182] Loaded profile config "embed-certs-348342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:57:00.019020  454751 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:57:00.019116  454751 addons.go:69] Setting dashboard=true in profile "embed-certs-348342"
	I1025 10:57:00.019126  454751 addons.go:69] Setting default-storageclass=true in profile "embed-certs-348342"
	I1025 10:57:00.019132  454751 addons.go:238] Setting addon dashboard=true in "embed-certs-348342"
	W1025 10:57:00.019139  454751 addons.go:247] addon dashboard should already be in state true
	I1025 10:57:00.019138  454751 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-348342"
	I1025 10:57:00.019167  454751 host.go:66] Checking if "embed-certs-348342" exists ...
	I1025 10:57:00.019477  454751 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:57:00.019687  454751 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:57:00.019116  454751 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-348342"
	I1025 10:57:00.024347  454751 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-348342"
	W1025 10:57:00.025188  454751 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:57:00.025249  454751 host.go:66] Checking if "embed-certs-348342" exists ...
	I1025 10:57:00.029384  454751 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:57:00.025165  454751 out.go:179] * Verifying Kubernetes components...
	I1025 10:57:00.033027  454751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:57:00.074936  454751 addons.go:238] Setting addon default-storageclass=true in "embed-certs-348342"
	W1025 10:57:00.074971  454751 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:57:00.075000  454751 host.go:66] Checking if "embed-certs-348342" exists ...
	I1025 10:57:00.075507  454751 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:57:00.090069  454751 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:57:00.138581  454751 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:57:00.141708  454751 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1025 10:56:55.817185  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
	W1025 10:56:57.817383  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
	W1025 10:56:59.817452  451806 pod_ready.go:104] pod "coredns-66bc5c9577-w9r8g" is not "Ready", error: <nil>
	I1025 10:57:00.141714  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:57:00.141813  454751 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:57:00.141900  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:57:00.146186  454751 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:57:00.146221  454751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:57:00.146303  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:57:00.185590  454751 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:57:00.185622  454751 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:57:00.185708  454751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:57:00.218534  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:57:00.235655  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:57:00.242282  454751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:57:00.480903  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:57:00.480927  454751 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:57:00.503577  454751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:57:00.545768  454751 node_ready.go:35] waiting up to 6m0s for node "embed-certs-348342" to be "Ready" ...
	I1025 10:57:00.551290  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:57:00.551369  454751 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:57:00.587428  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:57:00.587503  454751 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:57:00.590610  454751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:57:00.623813  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:57:00.623888  454751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:57:00.646859  454751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:57:00.686461  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:57:00.686540  454751 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:57:00.756729  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:57:00.756803  454751 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:57:00.811016  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:57:00.811091  454751 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:57:00.874615  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:57:00.874701  454751 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:57:00.889112  454751 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:57:00.889195  454751 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:57:00.907205  454751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
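
All ten dashboard manifests are applied in a single kubectl invocation rather than one apply per file, so the addon install costs one process launch instead of ten. A sketch of how that command line could be assembled (paths as logged; sudo accepts the leading VAR=value word as an environment override):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-clusterrole.yaml",
            "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml",
            "/etc/kubernetes/addons/dashboard-configmap.yaml",
            "/etc/kubernetes/addons/dashboard-dp.yaml",
            "/etc/kubernetes/addons/dashboard-role.yaml",
            "/etc/kubernetes/addons/dashboard-rolebinding.yaml",
            "/etc/kubernetes/addons/dashboard-sa.yaml",
            "/etc/kubernetes/addons/dashboard-secret.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml",
        }
        args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.34.1/kubectl", "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m) // each manifest becomes its own -f flag
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
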
	I1025 10:57:02.317774  451806 pod_ready.go:94] pod "coredns-66bc5c9577-w9r8g" is "Ready"
	I1025 10:57:02.317858  451806 pod_ready.go:86] duration metric: took 36.50663028s for pod "coredns-66bc5c9577-w9r8g" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.325740  451806 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.331186  451806 pod_ready.go:94] pod "etcd-default-k8s-diff-port-223394" is "Ready"
	I1025 10:57:02.331222  451806 pod_ready.go:86] duration metric: took 5.455434ms for pod "etcd-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.339430  451806 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.345286  451806 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-223394" is "Ready"
	I1025 10:57:02.345323  451806 pod_ready.go:86] duration metric: took 5.864176ms for pod "kube-apiserver-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.348347  451806 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.515952  451806 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-223394" is "Ready"
	I1025 10:57:02.515999  451806 pod_ready.go:86] duration metric: took 167.624099ms for pod "kube-controller-manager-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:02.715432  451806 pod_ready.go:83] waiting for pod "kube-proxy-zpq57" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:03.115303  451806 pod_ready.go:94] pod "kube-proxy-zpq57" is "Ready"
	I1025 10:57:03.115333  451806 pod_ready.go:86] duration metric: took 399.872536ms for pod "kube-proxy-zpq57" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:03.315540  451806 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:03.715305  451806 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-223394" is "Ready"
	I1025 10:57:03.715337  451806 pod_ready.go:86] duration metric: took 399.766976ms for pod "kube-scheduler-default-k8s-diff-port-223394" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:03.715350  451806 pod_ready.go:40] duration metric: took 37.908136271s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:57:03.828803  451806 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:57:03.832103  451806 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-223394" cluster and "default" namespace by default
	I1025 10:57:05.183951  454751 node_ready.go:49] node "embed-certs-348342" is "Ready"
	I1025 10:57:05.183981  454751 node_ready.go:38] duration metric: took 4.638120054s for node "embed-certs-348342" to be "Ready" ...
	I1025 10:57:05.183995  454751 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:57:05.184058  454751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:57:06.554541  454751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.963853417s)
	I1025 10:57:06.554610  454751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.907681741s)
	I1025 10:57:06.679020  454751 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.771723225s)
	I1025 10:57:06.679253  454751 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.495181757s)
	I1025 10:57:06.679290  454751 api_server.go:72] duration metric: took 6.660744922s to wait for apiserver process to appear ...
	I1025 10:57:06.679323  454751 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:57:06.679359  454751 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:57:06.683039  454751 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-348342 addons enable metrics-server
	
	I1025 10:57:06.688193  454751 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1025 10:57:06.688878  454751 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:57:06.690288  454751 api_server.go:141] control plane version: v1.34.1
	I1025 10:57:06.690311  454751 api_server.go:131] duration metric: took 10.965243ms to wait for apiserver health ...
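
The healthz wait above polls https://192.168.76.2:8443/healthz until it returns 200 with body "ok". A self-contained sketch of that probe; InsecureSkipVerify stands in for the cluster-CA trust the real check presumably wires up:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it answers 200 or the timeout elapses.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // expect "ok"
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
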
	I1025 10:57:06.690321  454751 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:57:06.693050  454751 addons.go:514] duration metric: took 6.674023614s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:57:06.694452  454751 system_pods.go:59] 8 kube-system pods found
	I1025 10:57:06.694503  454751 system_pods.go:61] "coredns-66bc5c9577-sqrrf" [15846173-f49c-4d50-af52-3b1b371fde43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:57:06.694514  454751 system_pods.go:61] "etcd-embed-certs-348342" [65a59ffa-4cba-4290-8c46-07e62bcf564b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:57:06.694523  454751 system_pods.go:61] "kindnet-q5mzm" [4caa08ee-f6f3-442c-ad08-2be933f2869f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:57:06.694529  454751 system_pods.go:61] "kube-apiserver-embed-certs-348342" [b67dbed8-5ebd-4a9f-804c-ed82033d0e19] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:57:06.694536  454751 system_pods.go:61] "kube-controller-manager-embed-certs-348342" [9ce2257d-b332-492a-8553-f7736a99b5db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:57:06.694547  454751 system_pods.go:61] "kube-proxy-j9ngr" [946e15f1-043f-4f6e-a995-79bb16033e3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:57:06.694554  454751 system_pods.go:61] "kube-scheduler-embed-certs-348342" [4e9a5441-6278-4eca-82d2-606bda24b02d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:57:06.694560  454751 system_pods.go:61] "storage-provisioner" [9a91278c-945c-48bc-be8b-39e026d485b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:57:06.694565  454751 system_pods.go:74] duration metric: took 4.239439ms to wait for pod list to return data ...
	I1025 10:57:06.694572  454751 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:57:06.701044  454751 default_sa.go:45] found service account: "default"
	I1025 10:57:06.701067  454751 default_sa.go:55] duration metric: took 6.488904ms for default service account to be created ...
	I1025 10:57:06.701076  454751 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:57:06.709385  454751 system_pods.go:86] 8 kube-system pods found
	I1025 10:57:06.709476  454751 system_pods.go:89] "coredns-66bc5c9577-sqrrf" [15846173-f49c-4d50-af52-3b1b371fde43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:57:06.709503  454751 system_pods.go:89] "etcd-embed-certs-348342" [65a59ffa-4cba-4290-8c46-07e62bcf564b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:57:06.709547  454751 system_pods.go:89] "kindnet-q5mzm" [4caa08ee-f6f3-442c-ad08-2be933f2869f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:57:06.709575  454751 system_pods.go:89] "kube-apiserver-embed-certs-348342" [b67dbed8-5ebd-4a9f-804c-ed82033d0e19] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:57:06.709600  454751 system_pods.go:89] "kube-controller-manager-embed-certs-348342" [9ce2257d-b332-492a-8553-f7736a99b5db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:57:06.709634  454751 system_pods.go:89] "kube-proxy-j9ngr" [946e15f1-043f-4f6e-a995-79bb16033e3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:57:06.709660  454751 system_pods.go:89] "kube-scheduler-embed-certs-348342" [4e9a5441-6278-4eca-82d2-606bda24b02d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:57:06.709690  454751 system_pods.go:89] "storage-provisioner" [9a91278c-945c-48bc-be8b-39e026d485b4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:57:06.709726  454751 system_pods.go:126] duration metric: took 8.643311ms to wait for k8s-apps to be running ...
	I1025 10:57:06.709753  454751 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:57:06.709841  454751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:57:06.730734  454751 system_svc.go:56] duration metric: took 20.971923ms WaitForService to wait for kubelet
	I1025 10:57:06.730813  454751 kubeadm.go:586] duration metric: took 6.712266693s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:57:06.730848  454751 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:57:06.735835  454751 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:57:06.735919  454751 node_conditions.go:123] node cpu capacity is 2
	I1025 10:57:06.735948  454751 node_conditions.go:105] duration metric: took 5.080465ms to run NodePressure ...
	I1025 10:57:06.735975  454751 start.go:241] waiting for startup goroutines ...
	I1025 10:57:06.736013  454751 start.go:246] waiting for cluster config update ...
	I1025 10:57:06.736041  454751 start.go:255] writing updated cluster config ...
	I1025 10:57:06.736361  454751 ssh_runner.go:195] Run: rm -f paused
	I1025 10:57:06.741191  454751 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:57:06.795560  454751 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sqrrf" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:57:08.824260  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:11.304311  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:13.802242  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:16.302490  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:18.803263  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:21.306458  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
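
The pod_ready loop above re-checks coredns roughly every 2.5s because its containers still report ContainersNotReady after the restart. A hedged client-go sketch of the same wait, polling a pod's PodReady condition (kubeconfig path and pod name taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21767-259409/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-sqrrf", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
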
	
	
	==> CRI-O <==
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.53490707Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.538031866Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.538171666Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.538241993Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.542070064Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.542237057Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.542308622Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.54560521Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.545769167Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.545844663Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.549393011Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:05 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:05.549554752Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.758839221Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a3304968-bf8e-4aa2-a4a5-f70c1e4061dc name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.763201517Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=84600204-6ac9-4583-81f5-cdd194ae7d0b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.76629697Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j/dashboard-metrics-scraper" id=3e611786-69e1-4d4a-a35e-fc96be615df7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.766452656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.779743197Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.789379542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.827886008Z" level=info msg="Created container 880c32c6e4d36298f25604e265d4a1cd58e474073d844950f788323abad6cb80: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j/dashboard-metrics-scraper" id=3e611786-69e1-4d4a-a35e-fc96be615df7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.838340043Z" level=info msg="Starting container: 880c32c6e4d36298f25604e265d4a1cd58e474073d844950f788323abad6cb80" id=1cb80a04-a548-45b5-9471-df7bb7124eee name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:57:11 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:11.840195978Z" level=info msg="Started container" PID=1735 containerID=880c32c6e4d36298f25604e265d4a1cd58e474073d844950f788323abad6cb80 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j/dashboard-metrics-scraper id=1cb80a04-a548-45b5-9471-df7bb7124eee name=/runtime.v1.RuntimeService/StartContainer sandboxID=76dcaa57282a0821af4ea153a07cd05320da6dce1c061db52f01ad6ede203f5e
	Oct 25 10:57:11 default-k8s-diff-port-223394 conmon[1733]: conmon 880c32c6e4d36298f256 <ninfo>: container 1735 exited with status 1
	Oct 25 10:57:12 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:12.02657254Z" level=info msg="Removing container: 1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34" id=5adfa69b-de70-4584-a164-461bb0ceadbf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:57:12 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:12.04048323Z" level=info msg="Error loading conmon cgroup of container 1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34: cgroup deleted" id=5adfa69b-de70-4584-a164-461bb0ceadbf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:57:12 default-k8s-diff-port-223394 crio[650]: time="2025-10-25T10:57:12.049692421Z" level=info msg="Removed container 1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j/dashboard-metrics-scraper" id=5adfa69b-de70-4584-a164-461bb0ceadbf name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	880c32c6e4d36       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   76dcaa57282a0       dashboard-metrics-scraper-6ffb444bf9-lgh8j             kubernetes-dashboard
	9ae681564efc7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   3907d5fe3c85e       storage-provisioner                                    kube-system
	006247d5cea81       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   8572972dd38bc       kubernetes-dashboard-855c9754f9-wmhcq                  kubernetes-dashboard
	87d8695047930       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   4c3ee99bab7c6       coredns-66bc5c9577-w9r8g                               kube-system
	07b8b77e031a1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   fca6e167999ac       busybox                                                default
	8fe3eb331d0de       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   7e763f2e49474       kube-proxy-zpq57                                       kube-system
	07de50a4075df       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   3907d5fe3c85e       storage-provisioner                                    kube-system
	f62f9dca6b34e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   2bee7666853aa       kindnet-tclvn                                          kube-system
	eb487ee4e10f6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e7dc1a723bbdb       kube-scheduler-default-k8s-diff-port-223394            kube-system
	6c982a01974be       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   f86e2e440e3e4       etcd-default-k8s-diff-port-223394                      kube-system
	9779636f70f0c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6f1887a3f8b06       kube-controller-manager-default-k8s-diff-port-223394   kube-system
	c82e104c40d0d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   5ad8bea0d798a       kube-apiserver-default-k8s-diff-port-223394            kube-system
	
	
	==> coredns [87d869504793063a19919eff743283ee2b55be58b9d8352930f36eb27a405677] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58091 - 23361 "HINFO IN 1723357095776880230.8227621694220695296. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018957768s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-223394
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-223394
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=default-k8s-diff-port-223394
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_54_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:54:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-223394
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:57:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:57:05 +0000   Sat, 25 Oct 2025 10:54:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:57:05 +0000   Sat, 25 Oct 2025 10:54:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:57:05 +0000   Sat, 25 Oct 2025 10:54:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:57:05 +0000   Sat, 25 Oct 2025 10:55:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-223394
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                de5cd403-6ec9-42cd-9429-85d79e1a8304
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-w9r8g                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 etcd-default-k8s-diff-port-223394                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-tclvn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-223394             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-223394    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-zpq57                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-default-k8s-diff-port-223394             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lgh8j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wmhcq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   Starting                 2m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m32s (x9 over 2m32s)  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s (x7 over 2m32s)  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m21s                  node-controller  Node default-k8s-diff-port-223394 event: Registered Node default-k8s-diff-port-223394 in Controller
	  Normal   NodeReady                98s                    kubelet          Node default-k8s-diff-port-223394 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-223394 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node default-k8s-diff-port-223394 event: Registered Node default-k8s-diff-port-223394 in Controller
	
	
	==> dmesg <==
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[ +23.146409] overlayfs: idmapped layers are currently not supported
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
	[Oct25 10:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:55] overlayfs: idmapped layers are currently not supported
	[Oct25 10:56] overlayfs: idmapped layers are currently not supported
	[ +41.501413] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6c982a01974bebe010fd07605d0e7e6f34d2e021c6ffb16dedef170e47c26875] <==
	{"level":"warn","ts":"2025-10-25T10:56:21.434569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.460529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.490721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.526661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.538491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.580254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.610239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.638091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.655162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.682231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.706228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.746424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.760050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.827164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.853497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.893341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.921550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.949097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:21.981908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:22.042831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:22.096078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:22.146358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:22.206150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:22.231377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:56:22.320780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55400","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:57:22 up  2:39,  0 user,  load average: 3.98, 3.51, 2.95
	Linux default-k8s-diff-port-223394 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f62f9dca6b34ebcc637ed54376e046a6d45148e3c61defb45110e0d36387c285] <==
	I1025 10:56:25.324833       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:56:25.325020       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:56:25.325149       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:56:25.325164       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:56:25.325174       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:56:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:56:25.614160       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:56:25.614259       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:56:25.614324       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:56:25.615454       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:56:55.530367       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:56:55.614978       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:56:55.616269       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:56:55.616427       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1025 10:56:56.715172       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:56:56.715215       1 metrics.go:72] Registering metrics
	I1025 10:56:56.715276       1 controller.go:711] "Syncing nftables rules"
	I1025 10:57:05.530101       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:57:05.530219       1 main.go:301] handling current node
	I1025 10:57:15.531751       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:57:15.531853       1 main.go:301] handling current node
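Note on the block above: the reflector "Failed to watch ... i/o timeout" errors followed by "Caches are synced" are the standard client-go list/watch cycle: the informer retries its initial List until the apiserver is reachable again, then the local cache syncs. A compilable sketch of that loop over Nodes, assuming in-cluster config; this is illustrative, not kindnet's actual code:

	package main

	import (
		"fmt"
		"time"

		v1 "k8s.io/api/core/v1"
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// A shared informer lists then watches Nodes, retrying on failure
		// exactly as the reflector errors above describe.
		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		nodes := factory.Core().V1().Nodes().Informer()
		nodes.AddEventHandler(cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				fmt.Println("handling node:", obj.(*v1.Node).Name)
			},
		})

		stop := make(chan struct{})
		factory.Start(stop)
		cache.WaitForCacheSync(stop, nodes.HasSynced) // the "Caches are synced" point
		<-stop
	}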
	
	
	==> kube-apiserver [c82e104c40d0dc2552e92b2571bdc6ca33dc11c21c904ce5b807e393939d0fe1] <==
	I1025 10:56:23.874468       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:56:23.879667       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:56:23.879689       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:56:23.879705       1 policy_source.go:240] refreshing policies
	I1025 10:56:23.881456       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:56:23.881483       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:56:23.881489       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:56:23.892846       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:56:23.936740       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:56:23.936898       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:56:23.937526       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:56:23.938033       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:56:23.961319       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:56:23.988209       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:56:24.462087       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:56:24.568187       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:56:24.623676       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:56:24.682328       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:56:24.700043       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:56:24.732657       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:56:25.014828       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.219.64"}
	I1025 10:56:25.059659       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.179.240"}
	I1025 10:56:27.269300       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:56:27.712087       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:56:27.835174       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9779636f70f0c278cba11f390d72d18ecf6492c685c187c54ca454f436e08653] <==
	I1025 10:56:27.254404       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:56:27.255350       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:56:27.255448       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:56:27.257943       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:56:27.258077       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:56:27.258466       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:56:27.258739       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:56:27.258857       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:56:27.260411       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:56:27.269563       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:56:27.269889       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:56:27.270024       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:56:27.273197       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:56:27.275296       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:56:27.275412       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:56:27.279048       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:56:27.282777       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:56:27.282827       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:56:27.288818       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:56:27.295261       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:56:27.295479       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:56:27.295590       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-223394"
	I1025 10:56:27.295686       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:56:27.296809       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:56:27.307809       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [8fe3eb331d0de316a3705b7640f3dfebe0d4ca0136afef7d77f67dcd835bce76] <==
	I1025 10:56:25.483984       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:56:25.600252       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:56:25.700483       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:56:25.700518       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:56:25.700607       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:56:25.720511       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:56:25.720567       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:56:25.724927       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:56:25.725290       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:56:25.725311       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:56:25.726197       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:56:25.726214       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:56:25.726500       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:56:25.726558       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:56:25.729024       1 config.go:309] "Starting node config controller"
	I1025 10:56:25.729089       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:56:25.729119       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:56:25.735939       1 config.go:200] "Starting service config controller"
	I1025 10:56:25.736036       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:56:25.736069       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:56:25.827185       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:56:25.827302       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [eb487ee4e10f68f40f25f7e75f8231d3678bb16df616131e9dd7d0bbf8f2f3ed] <==
	I1025 10:56:21.036809       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:56:23.799548       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:56:23.799668       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:56:23.799703       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:56:23.799755       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:56:23.914137       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:56:23.914166       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:56:23.928134       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:56:23.930492       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:56:23.930516       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:56:23.930536       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:56:24.042159       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:56:28 default-k8s-diff-port-223394 kubelet[780]: W1025 10:56:28.162187     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fdfe0713435ec0c9d963f0d524447bf680a521a643dff88e212a946e59a268a7/crio-8572972dd38bc93e4a40d6013ec132f234813569d85f5ed14d36400f8268d970 WatchSource:0}: Error finding container 8572972dd38bc93e4a40d6013ec132f234813569d85f5ed14d36400f8268d970: Status 404 returned error can't find the container with id 8572972dd38bc93e4a40d6013ec132f234813569d85f5ed14d36400f8268d970
	Oct 25 10:56:32 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:32.140953     780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 10:56:32 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:32.902542     780 scope.go:117] "RemoveContainer" containerID="5d714b9bbb9b6e23291d95edc0df57b5efb6e6d78ddee0ad1fb001cf714231e6"
	Oct 25 10:56:33 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:33.906679     780 scope.go:117] "RemoveContainer" containerID="5d714b9bbb9b6e23291d95edc0df57b5efb6e6d78ddee0ad1fb001cf714231e6"
	Oct 25 10:56:33 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:33.906958     780 scope.go:117] "RemoveContainer" containerID="2e65d01c457c37cb94eb964d5f2e47122236420bc787ce32d0d4be257f0a6ae5"
	Oct 25 10:56:33 default-k8s-diff-port-223394 kubelet[780]: E1025 10:56:33.907106     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lgh8j_kubernetes-dashboard(0f8d0c94-9b68-4974-bd20-76a4605246dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j" podUID="0f8d0c94-9b68-4974-bd20-76a4605246dd"
	Oct 25 10:56:34 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:34.909922     780 scope.go:117] "RemoveContainer" containerID="2e65d01c457c37cb94eb964d5f2e47122236420bc787ce32d0d4be257f0a6ae5"
	Oct 25 10:56:34 default-k8s-diff-port-223394 kubelet[780]: E1025 10:56:34.910578     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lgh8j_kubernetes-dashboard(0f8d0c94-9b68-4974-bd20-76a4605246dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j" podUID="0f8d0c94-9b68-4974-bd20-76a4605246dd"
	Oct 25 10:56:38 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:38.110100     780 scope.go:117] "RemoveContainer" containerID="2e65d01c457c37cb94eb964d5f2e47122236420bc787ce32d0d4be257f0a6ae5"
	Oct 25 10:56:38 default-k8s-diff-port-223394 kubelet[780]: E1025 10:56:38.110311     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lgh8j_kubernetes-dashboard(0f8d0c94-9b68-4974-bd20-76a4605246dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j" podUID="0f8d0c94-9b68-4974-bd20-76a4605246dd"
	Oct 25 10:56:48 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:48.757216     780 scope.go:117] "RemoveContainer" containerID="2e65d01c457c37cb94eb964d5f2e47122236420bc787ce32d0d4be257f0a6ae5"
	Oct 25 10:56:48 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:48.948059     780 scope.go:117] "RemoveContainer" containerID="2e65d01c457c37cb94eb964d5f2e47122236420bc787ce32d0d4be257f0a6ae5"
	Oct 25 10:56:48 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:48.948281     780 scope.go:117] "RemoveContainer" containerID="1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34"
	Oct 25 10:56:48 default-k8s-diff-port-223394 kubelet[780]: E1025 10:56:48.948475     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lgh8j_kubernetes-dashboard(0f8d0c94-9b68-4974-bd20-76a4605246dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j" podUID="0f8d0c94-9b68-4974-bd20-76a4605246dd"
	Oct 25 10:56:48 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:48.969922     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wmhcq" podStartSLOduration=11.82807827 podStartE2EDuration="21.969904735s" podCreationTimestamp="2025-10-25 10:56:27 +0000 UTC" firstStartedPulling="2025-10-25 10:56:28.165053013 +0000 UTC m=+10.622568296" lastFinishedPulling="2025-10-25 10:56:38.306879486 +0000 UTC m=+20.764394761" observedRunningTime="2025-10-25 10:56:38.941735969 +0000 UTC m=+21.399251252" watchObservedRunningTime="2025-10-25 10:56:48.969904735 +0000 UTC m=+31.427420018"
	Oct 25 10:56:55 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:55.968540     780 scope.go:117] "RemoveContainer" containerID="07de50a4075df0e36d34be8ef0e96165ff750994856c24e4902fe73fc3fcb1fb"
	Oct 25 10:56:58 default-k8s-diff-port-223394 kubelet[780]: I1025 10:56:58.109866     780 scope.go:117] "RemoveContainer" containerID="1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34"
	Oct 25 10:56:58 default-k8s-diff-port-223394 kubelet[780]: E1025 10:56:58.110132     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lgh8j_kubernetes-dashboard(0f8d0c94-9b68-4974-bd20-76a4605246dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j" podUID="0f8d0c94-9b68-4974-bd20-76a4605246dd"
	Oct 25 10:57:11 default-k8s-diff-port-223394 kubelet[780]: I1025 10:57:11.757382     780 scope.go:117] "RemoveContainer" containerID="1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34"
	Oct 25 10:57:12 default-k8s-diff-port-223394 kubelet[780]: I1025 10:57:12.018051     780 scope.go:117] "RemoveContainer" containerID="1d28ae23f2882ed4b53dbdc20f695a2eb7e4f8e7c0a8964f2612b92a7d5d2b34"
	Oct 25 10:57:12 default-k8s-diff-port-223394 kubelet[780]: I1025 10:57:12.018464     780 scope.go:117] "RemoveContainer" containerID="880c32c6e4d36298f25604e265d4a1cd58e474073d844950f788323abad6cb80"
	Oct 25 10:57:12 default-k8s-diff-port-223394 kubelet[780]: E1025 10:57:12.018666     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lgh8j_kubernetes-dashboard(0f8d0c94-9b68-4974-bd20-76a4605246dd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lgh8j" podUID="0f8d0c94-9b68-4974-bd20-76a4605246dd"
	Oct 25 10:57:17 default-k8s-diff-port-223394 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:57:17 default-k8s-diff-port-223394 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:57:17 default-k8s-diff-port-223394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
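Note on the block above: the CrashLoopBackOff messages show kubelet's per-container restart back-off doubling across restarts (back-off 10s, then 20s, then 40s); kubelet caps this doubling at 5 minutes by default. A tiny, purely illustrative sketch of that schedule:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Reproduce the schedule in the log: 10s, 20s, 40s, ... capped.
		backoff := 10 * time.Second
		const maxBackoff = 5 * time.Minute // kubelet's default cap
		for restart := 1; restart <= 7; restart++ {
			fmt.Printf("restart %d: back-off %s\n", restart, backoff)
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}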
	
	
	==> kubernetes-dashboard [006247d5cea81537221f79175bf519982dcf6cbc03bf6367c41f687ef833cf21] <==
	2025/10/25 10:56:38 Using namespace: kubernetes-dashboard
	2025/10/25 10:56:38 Using in-cluster config to connect to apiserver
	2025/10/25 10:56:38 Using secret token for csrf signing
	2025/10/25 10:56:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:56:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:56:38 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:56:38 Generating JWE encryption key
	2025/10/25 10:56:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:56:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:56:39 Initializing JWE encryption key from synchronized object
	2025/10/25 10:56:39 Creating in-cluster Sidecar client
	2025/10/25 10:56:39 Serving insecurely on HTTP port: 9090
	2025/10/25 10:56:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:57:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:56:38 Starting overwatch
	
	
	==> storage-provisioner [07de50a4075df0e36d34be8ef0e96165ff750994856c24e4902fe73fc3fcb1fb] <==
	I1025 10:56:25.347878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:56:55.350260       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9ae681564efc76dca39400bd2a4f79850bccb015cd2dac17da552bc3b801e930] <==
	I1025 10:56:56.071423       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:56:56.083990       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:56:56.084275       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:56:56.089269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:56:59.544905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:03.814426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:07.413239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:10.468193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:13.491915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:13.498649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:57:13.498808       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:57:13.501248       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-223394_adc12a52-6957-4610-be4e-05dda88c2ada!
	I1025 10:57:13.503047       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"349ccd11-8226-4feb-9ee3-b35b622cb7d9", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-223394_adc12a52-6957-4610-be4e-05dda88c2ada became leader
	W1025 10:57:13.503392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:13.522295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:57:13.602356       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-223394_adc12a52-6957-4610-be4e-05dda88c2ada!
	W1025 10:57:15.527656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:15.534331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:17.538071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:17.547324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:19.551750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:19.557676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:21.561328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:21.570247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
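Note on the block above: the repeated deprecation warnings come from the provisioner taking and renewing its leader lock through the legacy v1 Endpoints resource (kube-system/k8s.io-minikube-hostpath). The Lease-based equivalent that the warning points toward looks roughly like this with client-go's leaderelection package; this is a sketch under that assumption, not the provisioner's code, and identity handling is simplified:

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// A coordination.k8s.io Lease replaces the deprecated Endpoints lock.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { fmt.Println("became leader; start provisioning") },
				OnStoppedLeading: func() { fmt.Println("lost leadership") },
			},
		})
	}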
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-223394 -n default-k8s-diff-port-223394
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-223394 -n default-k8s-diff-port-223394: exit status 2 (428.479551ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-223394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.04s)
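Note on the status check above: a `--format={{.APIServer}}` argument is a Go text/template rendered over a status value, which is how the command can print `Running` on stdout while still exiting 2 (typically because another component, such as kubelet, is stopped). A minimal illustration of the templating side only; the Status type and its fields here are assumptions for the example, not minikube's actual type:

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical status shape, for illustration only.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		// Equivalent of --format={{.APIServer}}: render a single field.
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = t.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"})
	}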

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (8.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-348342 --alsologtostderr -v=1
E1025 10:57:59.743751  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-348342 --alsologtostderr -v=1: exit status 80 (2.235200449s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-348342 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:57:58.364032  461064 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:57:58.364196  461064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:57:58.364203  461064 out.go:374] Setting ErrFile to fd 2...
	I1025 10:57:58.364207  461064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:57:58.364460  461064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:57:58.364709  461064 out.go:368] Setting JSON to false
	I1025 10:57:58.364732  461064 mustload.go:65] Loading cluster: embed-certs-348342
	I1025 10:57:58.365168  461064 config.go:182] Loaded profile config "embed-certs-348342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:57:58.365656  461064 cli_runner.go:164] Run: docker container inspect embed-certs-348342 --format={{.State.Status}}
	I1025 10:57:58.387605  461064 host.go:66] Checking if "embed-certs-348342" exists ...
	I1025 10:57:58.387965  461064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:57:58.482969  461064 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:70 SystemTime:2025-10-25 10:57:58.472709381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:57:58.483802  461064 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-348342 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:57:58.489178  461064 out.go:179] * Pausing node embed-certs-348342 ... 
	I1025 10:57:58.492173  461064 host.go:66] Checking if "embed-certs-348342" exists ...
	I1025 10:57:58.492528  461064 ssh_runner.go:195] Run: systemctl --version
	I1025 10:57:58.492584  461064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348342
	I1025 10:57:58.514527  461064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/embed-certs-348342/id_rsa Username:docker}
	I1025 10:57:58.625203  461064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:57:58.659189  461064 pause.go:52] kubelet running: true
	I1025 10:57:58.659260  461064 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:57:58.949425  461064 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:57:58.949518  461064 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:57:59.031523  461064 cri.go:89] found id: "d5e3c73b3bb3432e2b9fbc1613b368968b855342702a5992c6d90219ffc7d2f4"
	I1025 10:57:59.031564  461064 cri.go:89] found id: "1431eefc9516720f0d87f27ee40753a17e6a3e1cdee8ecb4cadc2a143a7a7f26"
	I1025 10:57:59.031570  461064 cri.go:89] found id: "e62944b5dc1016625be50b0fd9819e27fccc5caa6393d6057a1e8c1b42dd6493"
	I1025 10:57:59.031576  461064 cri.go:89] found id: "3c115eaa48c2ed4a4235288bce281b06608a49db9d4580641620a0c3eee76305"
	I1025 10:57:59.031580  461064 cri.go:89] found id: "ec5a649a4f3eaa7fedb4b62e1ed03f701beb70acb1dad6d653e9d16c77f9c2c0"
	I1025 10:57:59.031584  461064 cri.go:89] found id: "4a176e83f06702f09feac763002a74b8b8a030874adc921f8bddd98aa3c974d4"
	I1025 10:57:59.031587  461064 cri.go:89] found id: "8fcdfc5fc2dc75f67348b352c94dacbcef58121b8688bd5a6ea85732681228cd"
	I1025 10:57:59.031591  461064 cri.go:89] found id: "c70dd3ad27c72e73d7f22a0f8ce5472875ecc49420f54d9480a48af44851b43d"
	I1025 10:57:59.031595  461064 cri.go:89] found id: "9e869b3a7afbb096c23279c50a357f29f02843cd43be8ae3176e4dc15d9e713d"
	I1025 10:57:59.031603  461064 cri.go:89] found id: "5a71a6b9c4cc471507afdaeb55835fbb3036fed68bb602a6960c9402df693838"
	I1025 10:57:59.031610  461064 cri.go:89] found id: "b665fbb37c2b93157ebbfdb2f5bf74ca890f415c87fe011f26d3fb206ab2b0a8"
	I1025 10:57:59.031614  461064 cri.go:89] found id: ""
	I1025 10:57:59.031665  461064 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:57:59.044095  461064 retry.go:31] will retry after 238.908484ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:57:59Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:57:59.283669  461064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:57:59.303103  461064 pause.go:52] kubelet running: false
	I1025 10:57:59.303189  461064 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:57:59.530946  461064 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:57:59.531172  461064 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:57:59.603032  461064 cri.go:89] found id: "d5e3c73b3bb3432e2b9fbc1613b368968b855342702a5992c6d90219ffc7d2f4"
	I1025 10:57:59.603058  461064 cri.go:89] found id: "1431eefc9516720f0d87f27ee40753a17e6a3e1cdee8ecb4cadc2a143a7a7f26"
	I1025 10:57:59.603063  461064 cri.go:89] found id: "e62944b5dc1016625be50b0fd9819e27fccc5caa6393d6057a1e8c1b42dd6493"
	I1025 10:57:59.603067  461064 cri.go:89] found id: "3c115eaa48c2ed4a4235288bce281b06608a49db9d4580641620a0c3eee76305"
	I1025 10:57:59.603070  461064 cri.go:89] found id: "ec5a649a4f3eaa7fedb4b62e1ed03f701beb70acb1dad6d653e9d16c77f9c2c0"
	I1025 10:57:59.603073  461064 cri.go:89] found id: "4a176e83f06702f09feac763002a74b8b8a030874adc921f8bddd98aa3c974d4"
	I1025 10:57:59.603076  461064 cri.go:89] found id: "8fcdfc5fc2dc75f67348b352c94dacbcef58121b8688bd5a6ea85732681228cd"
	I1025 10:57:59.603078  461064 cri.go:89] found id: "c70dd3ad27c72e73d7f22a0f8ce5472875ecc49420f54d9480a48af44851b43d"
	I1025 10:57:59.603081  461064 cri.go:89] found id: "9e869b3a7afbb096c23279c50a357f29f02843cd43be8ae3176e4dc15d9e713d"
	I1025 10:57:59.603117  461064 cri.go:89] found id: "5a71a6b9c4cc471507afdaeb55835fbb3036fed68bb602a6960c9402df693838"
	I1025 10:57:59.603143  461064 cri.go:89] found id: "b665fbb37c2b93157ebbfdb2f5bf74ca890f415c87fe011f26d3fb206ab2b0a8"
	I1025 10:57:59.603162  461064 cri.go:89] found id: ""
	I1025 10:57:59.603213  461064 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:57:59.614368  461064 retry.go:31] will retry after 309.892737ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:57:59Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:57:59.924927  461064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:57:59.945918  461064 pause.go:52] kubelet running: false
	I1025 10:57:59.946037  461064 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:58:00.331207  461064 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:58:00.331309  461064 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:58:00.494023  461064 cri.go:89] found id: "d5e3c73b3bb3432e2b9fbc1613b368968b855342702a5992c6d90219ffc7d2f4"
	I1025 10:58:00.494052  461064 cri.go:89] found id: "1431eefc9516720f0d87f27ee40753a17e6a3e1cdee8ecb4cadc2a143a7a7f26"
	I1025 10:58:00.494058  461064 cri.go:89] found id: "e62944b5dc1016625be50b0fd9819e27fccc5caa6393d6057a1e8c1b42dd6493"
	I1025 10:58:00.494062  461064 cri.go:89] found id: "3c115eaa48c2ed4a4235288bce281b06608a49db9d4580641620a0c3eee76305"
	I1025 10:58:00.494066  461064 cri.go:89] found id: "ec5a649a4f3eaa7fedb4b62e1ed03f701beb70acb1dad6d653e9d16c77f9c2c0"
	I1025 10:58:00.494070  461064 cri.go:89] found id: "4a176e83f06702f09feac763002a74b8b8a030874adc921f8bddd98aa3c974d4"
	I1025 10:58:00.494073  461064 cri.go:89] found id: "8fcdfc5fc2dc75f67348b352c94dacbcef58121b8688bd5a6ea85732681228cd"
	I1025 10:58:00.494076  461064 cri.go:89] found id: "c70dd3ad27c72e73d7f22a0f8ce5472875ecc49420f54d9480a48af44851b43d"
	I1025 10:58:00.494079  461064 cri.go:89] found id: "9e869b3a7afbb096c23279c50a357f29f02843cd43be8ae3176e4dc15d9e713d"
	I1025 10:58:00.494087  461064 cri.go:89] found id: "5a71a6b9c4cc471507afdaeb55835fbb3036fed68bb602a6960c9402df693838"
	I1025 10:58:00.494091  461064 cri.go:89] found id: "b665fbb37c2b93157ebbfdb2f5bf74ca890f415c87fe011f26d3fb206ab2b0a8"
	I1025 10:58:00.494094  461064 cri.go:89] found id: ""
	I1025 10:58:00.494151  461064 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:58:00.520976  461064 out.go:203] 
	W1025 10:58:00.524034  461064 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:58:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:58:00.524059  461064 out.go:285] * 
	W1025 10:58:00.530106  461064 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:58:00.532881  461064 out.go:203] 

                                                
                                                
** /stderr **
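Note on the failure above: the pause attempt disables kubelet successfully, but every retry of `sudo runc list -f json` fails with `open /run/runc: no such file or directory`, so minikube aborts with GUEST_PAUSE before it can freeze any containers. `/run/runc` is runc's default state root; on this CRI-O node that directory evidently does not exist. A minimal Go sketch of the list-with-fallback idea; the `/run/crio` fallback root is an assumption for illustration and this is not minikube's code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listContainers shells out to runc the way the failing pause path's
	// logs show; an empty root means runc's default state dir, /run/runc.
	func listContainers(root string) ([]byte, error) {
		args := []string{"runc"}
		if root != "" {
			args = append(args, "--root", root)
		}
		args = append(args, "list", "-f", "json")
		return exec.Command("sudo", args...).CombinedOutput()
	}

	func main() {
		out, err := listContainers("") // default /run/runc, as in the failing log
		if err != nil {
			// Hypothetical fallback: try an alternate state root before giving up.
			out, err = listContainers("/run/crio")
		}
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}
		fmt.Printf("%s", out)
	}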
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-348342 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-348342
helpers_test.go:243: (dbg) docker inspect embed-certs-348342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4",
	        "Created": "2025-10-25T10:55:14.663333918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 454884,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:56:51.874772685Z",
	            "FinishedAt": "2025-10-25T10:56:51.022204377Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/hosts",
	        "LogPath": "/var/lib/docker/containers/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4-json.log",
	        "Name": "/embed-certs-348342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-348342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-348342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4",
	                "LowerDir": "/var/lib/docker/overlay2/7af8a0a0e4548ff306a21c56011c7ef1e62940e78c923925b89499c6f933074a-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7af8a0a0e4548ff306a21c56011c7ef1e62940e78c923925b89499c6f933074a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7af8a0a0e4548ff306a21c56011c7ef1e62940e78c923925b89499c6f933074a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7af8a0a0e4548ff306a21c56011c7ef1e62940e78c923925b89499c6f933074a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-348342",
	                "Source": "/var/lib/docker/volumes/embed-certs-348342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-348342",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-348342",
	                "name.minikube.sigs.k8s.io": "embed-certs-348342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a71712838e7cfe072d0916f031751a5ebf1fdda02ee2ee24d555f4d6f99dc3e3",
	            "SandboxKey": "/var/run/docker/netns/a71712838e7c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-348342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:16:a8:b6:48:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9165ff42962d9a3f99eefc8873610a74534a4c5300b06a1e9249fa26eacccff4",
	                    "EndpointID": "ab90b946da0d243aef1ee2036d3c376806832855307e0e2618a2e4c1ea4edf9b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-348342",
	                        "f2631e70db67"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
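The dump above is plain "docker container inspect" output for the embed-certs-348342 node container; the published host ports under NetworkSettings.Ports are what the tooling reads back over loopback. As a sketch (assuming the container from this run still exists), the SSH port can be recovered with the same Go template minikube itself uses later in these logs:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-348342
	# 33433 for the inspect output above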
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-348342 -n embed-certs-348342
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-348342 -n embed-certs-348342: exit status 2 (492.30662ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
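The "may be ok" hedge exists because minikube status appears to fold component state into its exit code: with the cluster paused mid-Pause test, the host still reports Running while kubelet and apiserver are stopped, so the command exits non-zero even though the requested field prints cleanly. A minimal re-check against this profile would be:

	out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-348342; echo "exit $?"
	# Running
	# exit 2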
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-348342 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-348342 logs -n 25: (2.120400821s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:54 UTC │
	│ image   │ old-k8s-version-031983 image list --format=json                                                                                                                                                                                               │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ pause   │ -p old-k8s-version-031983 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │                     │
	│ delete  │ -p old-k8s-version-031983                                                                                                                                                                                                                     │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ delete  │ -p old-k8s-version-031983                                                                                                                                                                                                                     │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p cert-expiration-736062 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ delete  │ -p cert-expiration-736062                                                                                                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-223394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-223394 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-223394 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-348342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │                     │
	│ stop    │ -p embed-certs-348342 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-348342 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:57 UTC │
	│ image   │ default-k8s-diff-port-223394 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p default-k8s-diff-port-223394 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p disable-driver-mounts-487220                                                                                                                                                                                                               │ disable-driver-mounts-487220 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ start   │ -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ image   │ embed-certs-348342 image list --format=json                                                                                                                                                                                                   │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p embed-certs-348342 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:57:27
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:57:27.266994  458353 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:57:27.267121  458353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:57:27.267133  458353 out.go:374] Setting ErrFile to fd 2...
	I1025 10:57:27.267139  458353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:57:27.267387  458353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:57:27.267802  458353 out.go:368] Setting JSON to false
	I1025 10:57:27.268769  458353 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9599,"bootTime":1761380249,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:57:27.268836  458353 start.go:141] virtualization:  
	I1025 10:57:27.272654  458353 out.go:179] * [no-preload-093313] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:57:27.276720  458353 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:57:27.276821  458353 notify.go:220] Checking for updates...
	I1025 10:57:27.282832  458353 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:57:27.285825  458353 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:57:27.288870  458353 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:57:27.291919  458353 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:57:27.294912  458353 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:57:27.298612  458353 config.go:182] Loaded profile config "embed-certs-348342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:57:27.298761  458353 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:57:27.333894  458353 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:57:27.334043  458353 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:57:27.396018  458353 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:57:27.386534022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:57:27.396127  458353 docker.go:318] overlay module found
	I1025 10:57:27.399328  458353 out.go:179] * Using the docker driver based on user configuration
	I1025 10:57:27.402249  458353 start.go:305] selected driver: docker
	I1025 10:57:27.402283  458353 start.go:925] validating driver "docker" against <nil>
	I1025 10:57:27.402297  458353 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:57:27.403081  458353 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:57:27.463467  458353 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:57:27.453826266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:57:27.463628  458353 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:57:27.463872  458353 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:57:27.466740  458353 out.go:179] * Using Docker driver with root privileges
	I1025 10:57:27.470035  458353 cni.go:84] Creating CNI manager for ""
	I1025 10:57:27.470110  458353 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:57:27.470128  458353 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:57:27.470214  458353 start.go:349] cluster config:
	{Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:57:27.473392  458353 out.go:179] * Starting "no-preload-093313" primary control-plane node in "no-preload-093313" cluster
	I1025 10:57:27.476214  458353 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:57:27.479350  458353 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:57:27.482984  458353 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:57:27.483084  458353 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:57:27.483188  458353 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/config.json ...
	I1025 10:57:27.483247  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/config.json: {Name:mkbe6200dcd9ee7626a1ef8f7eea52da25c61105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:27.486335  458353 cache.go:107] acquiring lock: {Name:mke50a780b6f2fd20bf0f3807e5c55f2165bbc2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.486487  458353 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 10:57:27.486503  458353 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.921421ms
	I1025 10:57:27.486570  458353 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 10:57:27.486597  458353 cache.go:107] acquiring lock: {Name:mk6e894f2fc5a822328f2889957353638b611d87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.487374  458353 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:27.487723  458353 cache.go:107] acquiring lock: {Name:mk31460a278f5ce669dba0a3edc67dec38888d3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.487829  458353 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:27.488043  458353 cache.go:107] acquiring lock: {Name:mk4eab06b911708d94fc84824aa5eaf12c5f728f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.488168  458353 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:27.488419  458353 cache.go:107] acquiring lock: {Name:mk0fabb771ebb58b343ccbfcf727bcc4ba36d3bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.488589  458353 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:27.488890  458353 cache.go:107] acquiring lock: {Name:mk9b73d996269c05e36f39d743e660929113e3bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.488986  458353 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1025 10:57:27.489234  458353 cache.go:107] acquiring lock: {Name:mka63f62ad185c4a0c57416430877cf896f4796b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.489333  458353 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:27.489604  458353 cache.go:107] acquiring lock: {Name:mk3432d572d15dfd7f5ddfb6ca632d44b3f5c29a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.490393  458353 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:27.495484  458353 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:27.498083  458353 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:27.498301  458353 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:27.498548  458353 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:27.498747  458353 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1025 10:57:27.499648  458353 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:27.499833  458353 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:27.503605  458353 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:57:27.503625  458353 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:57:27.503639  458353 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:57:27.503673  458353 start.go:360] acquireMachinesLock for no-preload-093313: {Name:mk08df2ba22812bd327cf8f3a536e0d3054c6132 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.503778  458353 start.go:364] duration metric: took 89.904µs to acquireMachinesLock for "no-preload-093313"
	I1025 10:57:27.503803  458353 start.go:93] Provisioning new machine with config: &{Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:57:27.503867  458353 start.go:125] createHost starting for "" (driver="docker")
	W1025 10:57:28.303215  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:30.802156  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	I1025 10:57:27.507621  458353 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:57:27.507876  458353 start.go:159] libmachine.API.Create for "no-preload-093313" (driver="docker")
	I1025 10:57:27.507911  458353 client.go:168] LocalClient.Create starting
	I1025 10:57:27.508065  458353 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem
	I1025 10:57:27.508111  458353 main.go:141] libmachine: Decoding PEM data...
	I1025 10:57:27.508134  458353 main.go:141] libmachine: Parsing certificate...
	I1025 10:57:27.508196  458353 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem
	I1025 10:57:27.508220  458353 main.go:141] libmachine: Decoding PEM data...
	I1025 10:57:27.508235  458353 main.go:141] libmachine: Parsing certificate...
	I1025 10:57:27.508682  458353 cli_runner.go:164] Run: docker network inspect no-preload-093313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:57:27.525520  458353 cli_runner.go:211] docker network inspect no-preload-093313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:57:27.525611  458353 network_create.go:284] running [docker network inspect no-preload-093313] to gather additional debugging logs...
	I1025 10:57:27.525636  458353 cli_runner.go:164] Run: docker network inspect no-preload-093313
	W1025 10:57:27.547695  458353 cli_runner.go:211] docker network inspect no-preload-093313 returned with exit code 1
	I1025 10:57:27.547733  458353 network_create.go:287] error running [docker network inspect no-preload-093313]: docker network inspect no-preload-093313: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-093313 not found
	I1025 10:57:27.547747  458353 network_create.go:289] output of [docker network inspect no-preload-093313]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-093313 not found
	
	** /stderr **
	I1025 10:57:27.547842  458353 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:57:27.565835  458353 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2218a4d410c8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:a0:c3:54:c6:1f} reservation:<nil>}
	I1025 10:57:27.566348  458353 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-249eaf2d238d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:87:b9:4d:4c:0d} reservation:<nil>}
	I1025 10:57:27.566626  458353 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-210d4b236ff6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:d5:32:45:e6:85} reservation:<nil>}
	I1025 10:57:27.566942  458353 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9165ff42962d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:94:0e:3f:4d:73} reservation:<nil>}
	I1025 10:57:27.567409  458353 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c2f170}
	I1025 10:57:27.567433  458353 network_create.go:124] attempt to create docker network no-preload-093313 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 10:57:27.567510  458353 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-093313 no-preload-093313
	I1025 10:57:27.625702  458353 network_create.go:108] docker network no-preload-093313 192.168.85.0/24 created
	I1025 10:57:27.625744  458353 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-093313" container
	I1025 10:57:27.625817  458353 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:57:27.651138  458353 cli_runner.go:164] Run: docker volume create no-preload-093313 --label name.minikube.sigs.k8s.io=no-preload-093313 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:57:27.675686  458353 oci.go:103] Successfully created a docker volume no-preload-093313
	I1025 10:57:27.675766  458353 cli_runner.go:164] Run: docker run --rm --name no-preload-093313-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-093313 --entrypoint /usr/bin/test -v no-preload-093313:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:57:27.815243  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1025 10:57:27.835924  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1025 10:57:27.840693  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1025 10:57:27.845761  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1025 10:57:27.857456  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1025 10:57:27.859707  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1025 10:57:27.867008  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1025 10:57:27.922979  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1025 10:57:27.923030  458353 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 434.144085ms
	I1025 10:57:27.923045  458353 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 10:57:28.302176  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 10:57:28.302257  458353 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 813.839915ms
	I1025 10:57:28.302309  458353 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 10:57:28.347360  458353 oci.go:107] Successfully prepared a docker volume no-preload-093313
	I1025 10:57:28.347407  458353 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1025 10:57:28.347559  458353 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:57:28.347687  458353 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:57:28.407183  458353 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-093313 --name no-preload-093313 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-093313 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-093313 --network no-preload-093313 --ip 192.168.85.2 --volume no-preload-093313:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:57:28.851855  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 10:57:28.851884  458353 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.363841826s
	I1025 10:57:28.851896  458353 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 10:57:28.864131  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 10:57:28.867099  458353 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.379370089s
	I1025 10:57:28.867176  458353 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 10:57:28.887622  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Running}}
	I1025 10:57:28.928670  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 10:57:28.928693  458353 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.439097284s
	I1025 10:57:28.928706  458353 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 10:57:28.938558  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:57:28.979382  458353 cli_runner.go:164] Run: docker exec no-preload-093313 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:57:29.055045  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 10:57:29.055127  458353 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.568531939s
	I1025 10:57:29.055173  458353 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 10:57:29.082768  458353 oci.go:144] the created container "no-preload-093313" has a running status.
	I1025 10:57:29.082799  458353 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa...
	I1025 10:57:30.008547  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 10:57:30.008640  458353 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.519409679s
	I1025 10:57:30.008669  458353 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 10:57:30.008777  458353 cache.go:87] Successfully saved all images to host disk.
	I1025 10:57:30.304023  458353 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:57:30.328526  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:57:30.349289  458353 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:57:30.349314  458353 kic_runner.go:114] Args: [docker exec --privileged no-preload-093313 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:57:30.399003  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:57:30.427588  458353 machine.go:93] provisionDockerMachine start ...
	I1025 10:57:30.427684  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:30.453517  458353 main.go:141] libmachine: Using SSH client type: native
	I1025 10:57:30.453880  458353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1025 10:57:30.453892  458353 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:57:30.622636  458353 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-093313
	
	I1025 10:57:30.622724  458353 ubuntu.go:182] provisioning hostname "no-preload-093313"
	I1025 10:57:30.622839  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:30.660734  458353 main.go:141] libmachine: Using SSH client type: native
	I1025 10:57:30.661052  458353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1025 10:57:30.661064  458353 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-093313 && echo "no-preload-093313" | sudo tee /etc/hostname
	I1025 10:57:30.824759  458353 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-093313
	
	I1025 10:57:30.824840  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:30.843271  458353 main.go:141] libmachine: Using SSH client type: native
	I1025 10:57:30.843578  458353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1025 10:57:30.843603  458353 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-093313' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-093313/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-093313' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:57:30.994203  458353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:57:30.994231  458353 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:57:30.994261  458353 ubuntu.go:190] setting up certificates
	I1025 10:57:30.994272  458353 provision.go:84] configureAuth start
	I1025 10:57:30.994337  458353 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-093313
	I1025 10:57:31.016332  458353 provision.go:143] copyHostCerts
	I1025 10:57:31.016409  458353 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:57:31.016422  458353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:57:31.016506  458353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:57:31.016609  458353 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:57:31.016621  458353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:57:31.016649  458353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:57:31.016707  458353 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:57:31.016718  458353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:57:31.016743  458353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:57:31.016797  458353 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.no-preload-093313 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-093313]
	I1025 10:57:31.101971  458353 provision.go:177] copyRemoteCerts
	I1025 10:57:31.102060  458353 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:57:31.102112  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:31.121094  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:57:31.226026  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:57:31.245034  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:57:31.262816  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:57:31.280790  458353 provision.go:87] duration metric: took 286.491926ms to configureAuth
	I1025 10:57:31.280860  458353 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:57:31.281064  458353 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:57:31.281180  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:31.303466  458353 main.go:141] libmachine: Using SSH client type: native
	I1025 10:57:31.303767  458353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1025 10:57:31.303787  458353 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:57:31.662087  458353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:57:31.662112  458353 machine.go:96] duration metric: took 1.234503383s to provisionDockerMachine
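The sysconfig drop-in written over SSH above only matters if the crio unit actually sources it; the restart succeeding suggests the kicbase unit reads /etc/sysconfig/crio.minikube via an EnvironmentFile directive, but that is an assumption, not something this log shows. Two quick checks on the node (a sketch):

	# assumption: the crio unit references the sysconfig file via EnvironmentFile
	systemctl cat crio | grep -i EnvironmentFile
	cat /etc/sysconfig/crio.minikube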
	I1025 10:57:31.662122  458353 client.go:171] duration metric: took 4.154199937s to LocalClient.Create
	I1025 10:57:31.662135  458353 start.go:167] duration metric: took 4.15426155s to libmachine.API.Create "no-preload-093313"
	I1025 10:57:31.662143  458353 start.go:293] postStartSetup for "no-preload-093313" (driver="docker")
	I1025 10:57:31.662156  458353 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:57:31.662232  458353 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:57:31.662274  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:31.685652  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:57:31.794217  458353 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:57:31.798927  458353 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:57:31.798958  458353 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:57:31.798969  458353 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:57:31.799030  458353 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:57:31.799125  458353 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:57:31.799259  458353 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:57:31.807390  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:57:31.827171  458353 start.go:296] duration metric: took 165.011157ms for postStartSetup
	I1025 10:57:31.827548  458353 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-093313
	I1025 10:57:31.843969  458353 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/config.json ...
	I1025 10:57:31.844259  458353 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:57:31.844311  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:31.862993  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:57:31.966997  458353 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:57:31.971569  458353 start.go:128] duration metric: took 4.467687956s to createHost
	I1025 10:57:31.971595  458353 start.go:83] releasing machines lock for "no-preload-093313", held for 4.467807916s
	I1025 10:57:31.971669  458353 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-093313
	I1025 10:57:31.992126  458353 ssh_runner.go:195] Run: cat /version.json
	I1025 10:57:31.992167  458353 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:57:31.992180  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:31.992230  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:32.011435  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:57:32.023648  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:57:32.210715  458353 ssh_runner.go:195] Run: systemctl --version
	I1025 10:57:32.217197  458353 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:57:32.253752  458353 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:57:32.258198  458353 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:57:32.258266  458353 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:57:32.292323  458353 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 10:57:32.292356  458353 start.go:495] detecting cgroup driver to use...
	I1025 10:57:32.292390  458353 detect.go:187] detected "cgroupfs" cgroup driver on host os
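detect.go reports "cgroupfs" for the host here. A common way to reproduce the underlying distinction is to check the filesystem type mounted at /sys/fs/cgroup: cgroup2fs indicates a unified cgroup v2 hierarchy, tmpfs a legacy v1 layout (a sketch of the idea, not necessarily the exact probe detect.go performs):

	# cgroup2fs => cgroup v2; tmpfs => cgroup v1
	stat -fc %T /sys/fs/cgroup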
	I1025 10:57:32.292455  458353 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:57:32.312284  458353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:57:32.325412  458353 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:57:32.325517  458353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:57:32.341679  458353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:57:32.361342  458353 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:57:32.494164  458353 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:57:32.618118  458353 docker.go:234] disabling docker service ...
	I1025 10:57:32.618185  458353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:57:32.642262  458353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:57:32.656773  458353 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:57:32.777089  458353 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:57:32.903271  458353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:57:32.919375  458353 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:57:32.935963  458353 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:57:32.936044  458353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:32.946528  458353 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:57:32.946605  458353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:32.957072  458353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:32.967015  458353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:32.975916  458353 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:57:32.983928  458353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:32.992998  458353 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:33.011584  458353 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:33.021344  458353 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:57:33.029523  458353 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:57:33.037529  458353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:57:33.151248  458353 ssh_runner.go:195] Run: sudo systemctl restart crio
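Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these keys before the restart (reconstructed from the commands in this log; the TOML section headers are assumed from stock CRI-O layout, not dumped from the node):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]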
	I1025 10:57:33.281781  458353 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:57:33.281850  458353 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:57:33.286033  458353 start.go:563] Will wait 60s for crictl version
	I1025 10:57:33.286097  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.289581  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:57:33.317230  458353 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:57:33.317317  458353 ssh_runner.go:195] Run: crio --version
	I1025 10:57:33.347371  458353 ssh_runner.go:195] Run: crio --version
	I1025 10:57:33.379140  458353 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1025 10:57:33.300821  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:35.303297  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	I1025 10:57:33.382012  458353 cli_runner.go:164] Run: docker network inspect no-preload-093313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:57:33.399575  458353 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:57:33.403658  458353 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
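The /etc/hosts rewrite above is an idempotent append: grep -v strips any stale host.minikube.internal entry, echo re-adds the current mapping, and the result is copied back with sudo (a temp file is needed because the redirection itself runs unprivileged). The same idiom, spelled out standalone (illustrative values):

	# replace-or-append a single /etc/hosts entry without duplicating it
	ENTRY=$'192.168.85.1\thost.minikube.internal'
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$ENTRY"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts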
	I1025 10:57:33.415760  458353 kubeadm.go:883] updating cluster {Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:57:33.415872  458353 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:57:33.415918  458353 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:57:33.442593  458353 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1025 10:57:33.442621  458353 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 10:57:33.442667  458353 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:33.442877  458353 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:33.442999  458353 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:33.443082  458353 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:33.443171  458353 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:33.443254  458353 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1025 10:57:33.443364  458353 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:33.443466  458353 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:33.444225  458353 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:33.444783  458353 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1025 10:57:33.444976  458353 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:33.445270  458353 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:33.445528  458353 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:33.445684  458353 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:33.445833  458353 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:33.445975  458353 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:33.661801  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:33.664167  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:33.671096  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:33.674105  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1025 10:57:33.685201  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:33.687990  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:33.695096  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:33.768609  458353 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1025 10:57:33.768694  458353 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:33.768791  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.784811  458353 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1025 10:57:33.784894  458353 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:33.784984  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.831440  458353 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1025 10:57:33.831521  458353 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:33.831602  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.848442  458353 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1025 10:57:33.848483  458353 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1025 10:57:33.848531  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.848662  458353 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1025 10:57:33.848681  458353 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:33.848709  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.848790  458353 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1025 10:57:33.848814  458353 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:33.848843  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.848900  458353 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1025 10:57:33.848915  458353 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:33.848934  458353 ssh_runner.go:195] Run: which crictl
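Each "needs transfer" decision above compares the image ID the runtime reports against the ID recorded for the cached tarball; a mismatch (or a missing image) triggers the rmi/reload cycle that follows. The runtime-side ID can be read back with crictl (a sketch using the same binary this log invokes):

	# show the image ID CRI-O currently holds for the apiserver tag
	sudo /usr/local/bin/crictl inspecti -o json registry.k8s.io/kube-apiserver:v1.34.1 | grep '"id"'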
	I1025 10:57:33.849015  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:33.849085  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:33.849134  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:33.878985  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 10:57:33.879137  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:33.879176  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:33.881310  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:33.972460  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:33.972572  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:33.972649  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:33.987207  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:33.987383  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:33.987386  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 10:57:33.989807  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:34.075601  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:34.075775  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:34.075928  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:34.102459  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 10:57:34.102660  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:34.102729  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:34.102898  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:34.167174  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1025 10:57:34.167347  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1025 10:57:34.167488  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1025 10:57:34.167599  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1025 10:57:34.167716  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 10:57:34.167776  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 10:57:34.212533  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1025 10:57:34.212633  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1025 10:57:34.212698  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1025 10:57:34.212750  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1025 10:57:34.212801  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1025 10:57:34.212847  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 10:57:34.212901  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1025 10:57:34.212948  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1025 10:57:34.213011  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1025 10:57:34.213026  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1025 10:57:34.213069  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1025 10:57:34.213079  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1025 10:57:34.213116  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1025 10:57:34.213127  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1025 10:57:34.260089  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1025 10:57:34.260131  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1025 10:57:34.260198  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1025 10:57:34.260216  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1025 10:57:34.260263  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1025 10:57:34.260274  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1025 10:57:34.260314  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1025 10:57:34.260325  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
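The stat/scp pairs above are a minimal cache sync: stat -c "%s %y" probes for size and mtime, a non-zero exit means the tarball is absent on the node, and only then is it copied. The same idiom standalone (hypothetical paths and host):

	# copy the image tarball only if the node does not already have it
	SRC=./cache/images/arm64/registry.k8s.io/pause_3.10.1
	DST=/var/lib/minikube/images/pause_3.10.1
	ssh node "stat -c '%s %y' $DST" >/dev/null 2>&1 || scp "$SRC" "node:$DST"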
	W1025 10:57:34.306891  458353 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1025 10:57:34.306952  458353 retry.go:31] will retry after 281.990125ms: ssh: rejected: connect failed (open failed)
	I1025 10:57:34.437954  458353 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1025 10:57:34.438050  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1025 10:57:34.438128  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:34.497127  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:57:34.589544  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:34.613300  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	W1025 10:57:34.886935  458353 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 10:57:34.887136  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:34.963723  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1025 10:57:34.963761  458353 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1025 10:57:34.963814  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1025 10:57:35.038831  458353 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1025 10:57:35.038869  458353 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:35.038919  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:36.942206  458353 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.97836345s)
	I1025 10:57:36.942233  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1025 10:57:36.942256  458353 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 10:57:36.942297  458353 ssh_runner.go:235] Completed: which crictl: (1.903363158s)
	I1025 10:57:36.942394  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:36.942304  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	W1025 10:57:37.801608  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:39.801849  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	I1025 10:57:38.473507  458353 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.531007488s)
	I1025 10:57:38.473538  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1025 10:57:38.473557  458353 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 10:57:38.473588  458353 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.531108494s)
	I1025 10:57:38.473608  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 10:57:38.473692  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:38.505084  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:39.829580  458353 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.355942628s)
	I1025 10:57:39.829612  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1025 10:57:39.829621  458353 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.324505577s)
	I1025 10:57:39.829630  458353 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 10:57:39.829653  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 10:57:39.829683  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 10:57:39.829735  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1025 10:57:41.246153  458353 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.416395957s)
	I1025 10:57:41.246189  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1025 10:57:41.246225  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1025 10:57:41.246238  458353 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.416539203s)
	I1025 10:57:41.246254  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1025 10:57:41.246271  458353 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1025 10:57:41.246312  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	W1025 10:57:41.802393  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:43.802593  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	I1025 10:57:44.801654  454751 pod_ready.go:94] pod "coredns-66bc5c9577-sqrrf" is "Ready"
	I1025 10:57:44.801690  454751 pod_ready.go:86] duration metric: took 38.006051015s for pod "coredns-66bc5c9577-sqrrf" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:44.804461  454751 pod_ready.go:83] waiting for pod "etcd-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:44.809433  454751 pod_ready.go:94] pod "etcd-embed-certs-348342" is "Ready"
	I1025 10:57:44.809463  454751 pod_ready.go:86] duration metric: took 4.971836ms for pod "etcd-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:44.812336  454751 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:44.817375  454751 pod_ready.go:94] pod "kube-apiserver-embed-certs-348342" is "Ready"
	I1025 10:57:44.817404  454751 pod_ready.go:86] duration metric: took 5.040439ms for pod "kube-apiserver-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:44.819911  454751 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:45.001226  454751 pod_ready.go:94] pod "kube-controller-manager-embed-certs-348342" is "Ready"
	I1025 10:57:45.001254  454751 pod_ready.go:86] duration metric: took 181.317288ms for pod "kube-controller-manager-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:45.203229  454751 pod_ready.go:83] waiting for pod "kube-proxy-j9ngr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:45.601094  454751 pod_ready.go:94] pod "kube-proxy-j9ngr" is "Ready"
	I1025 10:57:45.601124  454751 pod_ready.go:86] duration metric: took 397.86349ms for pod "kube-proxy-j9ngr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:45.799535  454751 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:46.200699  454751 pod_ready.go:94] pod "kube-scheduler-embed-certs-348342" is "Ready"
	I1025 10:57:46.200729  454751 pod_ready.go:86] duration metric: took 401.154408ms for pod "kube-scheduler-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:46.200740  454751 pod_ready.go:40] duration metric: took 39.459471369s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:57:46.285035  454751 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:57:46.297862  454751 out.go:179] * Done! kubectl is now configured to use "embed-certs-348342" cluster and "default" namespace by default
	I1025 10:57:42.994157  458353 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.747822653s)
	I1025 10:57:42.994189  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1025 10:57:42.994214  458353 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1025 10:57:42.994265  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1025 10:57:47.233142  458353 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.238849838s)
	I1025 10:57:47.233169  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1025 10:57:47.233188  458353 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 10:57:47.233242  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1025 10:57:47.846175  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 10:57:47.846215  458353 cache_images.go:124] Successfully loaded all cached images
	I1025 10:57:47.846221  458353 cache_images.go:93] duration metric: took 14.403586248s to LoadCachedImages
	I1025 10:57:47.846233  458353 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 10:57:47.846338  458353 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-093313 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:57:47.846438  458353 ssh_runner.go:195] Run: crio config
	I1025 10:57:47.915711  458353 cni.go:84] Creating CNI manager for ""
	I1025 10:57:47.915736  458353 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:57:47.915758  458353 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:57:47.915788  458353 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-093313 NodeName:no-preload-093313 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:57:47.915959  458353 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-093313"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:57:47.916054  458353 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:57:47.926054  458353 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1025 10:57:47.926131  458353 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1025 10:57:47.934514  458353 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1025 10:57:47.934678  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1025 10:57:47.935231  458353 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1025 10:57:47.935246  458353 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/linux/arm64/v1.34.1/kubelet
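The ?checksum=file:...sha256 suffix instructs the downloader to fetch the published .sha256 file and verify the binary against it before caching. Done by hand, the equivalent is (a sketch; dl.k8s.io's .sha256 files contain the bare hash):

	curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet
	curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256
	# sha256sum -c expects "<hash>  <filename>"
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check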
	I1025 10:57:47.940005  458353 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1025 10:57:47.940046  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1025 10:57:48.624579  458353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:57:48.641344  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1025 10:57:48.647659  458353 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1025 10:57:48.647742  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1025 10:57:48.823100  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1025 10:57:48.838798  458353 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1025 10:57:48.838842  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1025 10:57:49.298843  458353 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:57:49.307701  458353 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:57:49.322575  458353 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:57:49.337157  458353 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
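The kubeadm config printed earlier is staged here as kubeadm.yaml.new. Recent kubeadm releases can sanity-check such a multi-document config offline (a sketch; availability of the validate subcommand in v1.34 is assumed):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new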
	I1025 10:57:49.352212  458353 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:57:49.355928  458353 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:57:49.366536  458353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:57:49.494836  458353 ssh_runner.go:195] Run: sudo systemctl start kubelet
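With the unit file and the 10-kubeadm.conf drop-in in place and kubelet started, the effective merged unit can be inspected on the node (a sketch):

	systemctl cat kubelet                 # unit plus drop-ins, concatenated
	systemctl show kubelet -p ExecStart   # the final ExecStart after overrides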
	I1025 10:57:49.511200  458353 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313 for IP: 192.168.85.2
	I1025 10:57:49.511227  458353 certs.go:195] generating shared ca certs ...
	I1025 10:57:49.511245  458353 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:49.511393  458353 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:57:49.511444  458353 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:57:49.511456  458353 certs.go:257] generating profile certs ...
	I1025 10:57:49.511515  458353 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.key
	I1025 10:57:49.511533  458353 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt with IP's: []
	I1025 10:57:49.921606  458353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt ...
	I1025 10:57:49.921640  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: {Name:mka498e73d17603c69366bc81d183c3446d69f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:49.921844  458353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.key ...
	I1025 10:57:49.921859  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.key: {Name:mkaecbe7725a6928cd3905888c40f2281bbc8469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:49.921954  458353 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key.bf0f12ad
	I1025 10:57:49.921970  458353 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt.bf0f12ad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1025 10:57:50.030460  458353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt.bf0f12ad ...
	I1025 10:57:50.030495  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt.bf0f12ad: {Name:mk18b59f4f7637f9c77d3f911f24dd6021c03ef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:50.030688  458353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key.bf0f12ad ...
	I1025 10:57:50.030699  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key.bf0f12ad: {Name:mk3a1a855460683d99627b6112aabbdd0deb59bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:50.030776  458353 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt.bf0f12ad -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt
	I1025 10:57:50.030860  458353 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key.bf0f12ad -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key
	I1025 10:57:50.030924  458353 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.key
	I1025 10:57:50.030949  458353 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.crt with IP's: []
	I1025 10:57:50.173557  458353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.crt ...
	I1025 10:57:50.173601  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.crt: {Name:mkd9fc199c22a4ce62999321c0bc622710c23197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:50.173815  458353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.key ...
	I1025 10:57:50.173832  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.key: {Name:mk59c61c585e86120ac2b64fcd17b5250f1be546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:50.174079  458353 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:57:50.174130  458353 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:57:50.174152  458353 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:57:50.174182  458353 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:57:50.174214  458353 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:57:50.174242  458353 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:57:50.174294  458353 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:57:50.174909  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:57:50.196503  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:57:50.218403  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:57:50.236859  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:57:50.255441  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:57:50.273937  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:57:50.293589  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:57:50.312501  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:57:50.330205  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:57:50.348928  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:57:50.366737  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:57:50.384996  458353 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:57:50.407098  458353 ssh_runner.go:195] Run: openssl version
	I1025 10:57:50.417230  458353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:57:50.425700  458353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:57:50.433289  458353 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:57:50.433359  458353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:57:50.477528  458353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:57:50.486306  458353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:57:50.494672  458353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:57:50.498367  458353 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:57:50.498475  458353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:57:50.539453  458353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:57:50.548016  458353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:57:50.556500  458353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:57:50.560228  458353 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:57:50.560337  458353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:57:50.601298  458353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
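
The openssl x509 -hash / ln -fs pairs above are how minikube makes its CAs visible to OpenSSL-based clients on the node: OpenSSL resolves trust anchors in /etc/ssl/certs by subject-hash filenames such as b5213941.0, so each PEM gets a symlink named after its hash. A minimal Go sketch of the same convention (not minikube's actual code; minikube runs these commands over SSH via ssh_runner, and the paths below are illustrative):

// casymlink.go - sketch of the CA-trust step logged above: hash a PEM
// certificate with openssl and symlink it into the certs dir under its
// subject-hash name so OpenSSL-based clients trust it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(pemPath, certsDir string) error {
	// Equivalent to: openssl x509 -hash -noout -in <pemPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent to: ln -fs <pemPath> <link>
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
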
	I1025 10:57:50.609621  458353 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:57:50.613211  458353 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
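
The failed stat above is the expected outcome on a fresh node: minikube probes for apiserver-kubelet-client.crt and treats "No such file or directory" as evidence of a first start rather than a restart. A local sketch of that check (minikube itself runs stat over SSH and inspects the exit status):

// firststart.go - sketch of the "cert doesn't exist, likely first start" probe.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func isFirstStart(certPath string) (bool, error) {
	_, err := os.Stat(certPath)
	if errors.Is(err, fs.ErrNotExist) {
		return true, nil // no kubeadm-managed cert yet: fresh cluster
	}
	return false, err // nil err means the cert exists: restart path
}

func main() {
	first, err := isFirstStart("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(first, err)
}
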
	I1025 10:57:50.613292  458353 kubeadm.go:400] StartCluster: {Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:57:50.613375  458353 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:57:50.613434  458353 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:57:50.646038  458353 cri.go:89] found id: ""
	I1025 10:57:50.646112  458353 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:57:50.654192  458353 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:57:50.661866  458353 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:57:50.661955  458353 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:57:50.669776  458353 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:57:50.669798  458353 kubeadm.go:157] found existing configuration files:
	
	I1025 10:57:50.669860  458353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:57:50.680229  458353 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:57:50.680305  458353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:57:50.688681  458353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:57:50.696527  458353 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:57:50.696642  458353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:57:50.703999  458353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:57:50.711889  458353 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:57:50.711967  458353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:57:50.719829  458353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:57:50.727534  458353 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:57:50.727603  458353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
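
The four grep-then-rm pairs above apply one rule per kubeconfig: keep /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf only if it already references https://control-plane.minikube.internal:8443, otherwise remove it so kubeadm regenerates it. A compact local sketch of that cleanup (the real code shells out through ssh_runner):

// staleconf.go - sketch of the stale-kubeconfig cleanup seen above.
package main

import (
	"bytes"
	"os"
	"path/filepath"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		// Missing file or wrong endpoint: remove so `kubeadm init` rewrites it.
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			_ = os.Remove(path) // rm -f semantics: ignore "not exist"
		}
	}
}
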
	I1025 10:57:50.735766  458353 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:57:50.773041  458353 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:57:50.773302  458353 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:57:50.803160  458353 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:57:50.803276  458353 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:57:50.803342  458353 kubeadm.go:318] OS: Linux
	I1025 10:57:50.803414  458353 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:57:50.803485  458353 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:57:50.803560  458353 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:57:50.803631  458353 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:57:50.803706  458353 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:57:50.803791  458353 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:57:50.803875  458353 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:57:50.803984  458353 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:57:50.804063  458353 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:57:50.871767  458353 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:57:50.871934  458353 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:57:50.872070  458353 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:57:50.889440  458353 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:57:50.897114  458353 out.go:252]   - Generating certificates and keys ...
	I1025 10:57:50.897278  458353 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:57:50.897396  458353 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:57:50.977620  458353 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:57:51.184515  458353 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:57:52.539163  458353 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:57:52.855682  458353 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:57:53.120821  458353 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:57:53.121120  458353 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-093313] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 10:57:53.391662  458353 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:57:53.391908  458353 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-093313] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 10:57:53.767915  458353 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:57:54.080764  458353 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:57:54.752630  458353 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:57:54.752980  458353 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:57:55.125432  458353 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:57:56.042912  458353 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:57:56.238667  458353 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:57:56.777442  458353 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:57:57.636100  458353 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:57:57.636943  458353 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:57:57.642792  458353 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
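
Lines such as "[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-093313] and IPs [192.168.85.2 127.0.0.1 ::1]" can be double-checked after the run by parsing the certificate and reading its SANs. A sketch, assuming the usual kubeadm layout under the certificateDir logged above (the path is illustrative):

// sans.go - sketch: print the SANs kubeadm reports for a serving certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// certificateDir is /var/lib/minikube/certs per the log above;
	// etcd serving certs live under etcd/ inside it.
	data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect [localhost no-preload-093313]
	fmt.Println("IP SANs: ", cert.IPAddresses) // expect [192.168.85.2 127.0.0.1 ::1]
}
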
	
	
	==> CRI-O <==
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.263207806Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.277919504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.283417925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.319122551Z" level=info msg="Created container 5a71a6b9c4cc471507afdaeb55835fbb3036fed68bb602a6960c9402df693838: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5/dashboard-metrics-scraper" id=e9cd59c6-48f7-483f-bd4a-51e584c00991 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.330655808Z" level=info msg="Starting container: 5a71a6b9c4cc471507afdaeb55835fbb3036fed68bb602a6960c9402df693838" id=1a9afbca-cf64-40ef-8be0-d66353223b33 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.336603898Z" level=info msg="Started container" PID=1638 containerID=5a71a6b9c4cc471507afdaeb55835fbb3036fed68bb602a6960c9402df693838 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5/dashboard-metrics-scraper id=1a9afbca-cf64-40ef-8be0-d66353223b33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=de41e8f026b0bd11c035134ba8711f7deb7cae6c63afc760b019eb03ad830294
	Oct 25 10:57:39 embed-certs-348342 conmon[1636]: conmon 5a71a6b9c4cc471507af <ninfo>: container 1638 exited with status 1
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.509362456Z" level=info msg="Removing container: bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285" id=da649283-88d0-466f-9242-365fc680d706 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.52581442Z" level=info msg="Error loading conmon cgroup of container bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285: cgroup deleted" id=da649283-88d0-466f-9242-365fc680d706 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.537565057Z" level=info msg="Removed container bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5/dashboard-metrics-scraper" id=da649283-88d0-466f-9242-365fc680d706 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.115019663Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.12120129Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.1212407Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.12126292Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.127336747Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.127373137Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.127398015Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.131024903Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.131225471Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.131301837Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.134770816Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.134806607Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.134830107Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.138459916Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.138494788Z" level=info msg="Updated default CNI network name to kindnet"
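
The CREATE/WRITE/RENAME sequence CRI-O reports above is kindnet updating its CNI config atomically: it writes 10-kindnet.conflist.temp in full, then renames it over 10-kindnet.conflist, so the watcher never loads a half-written file. A sketch of the writer side of that pattern (the conflist JSON here is a placeholder):

// cniwrite.go - sketch of the write-temp-then-rename pattern CRI-O observes above.
package main

import (
	"os"
	"path/filepath"
)

func writeConflist(dir string, data []byte) error {
	tmp := filepath.Join(dir, "10-kindnet.conflist.temp")
	dst := filepath.Join(dir, "10-kindnet.conflist")
	if err := os.WriteFile(tmp, data, 0o644); err != nil { // CREATE + WRITE events
		return err
	}
	return os.Rename(tmp, dst) // RENAME event; atomic on the same filesystem
}

func main() {
	conf := []byte(`{"cniVersion":"0.3.1","name":"kindnet","plugins":[{"type":"ptp"}]}`)
	if err := writeConflist("/etc/cni/net.d", conf); err != nil {
		panic(err)
	}
}
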
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5a71a6b9c4cc4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   de41e8f026b0b       dashboard-metrics-scraper-6ffb444bf9-ft6v5   kubernetes-dashboard
	d5e3c73b3bb34       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   8c2f7b7532921       storage-provisioner                          kube-system
	b665fbb37c2b9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   ab1a32f8e4eb5       kubernetes-dashboard-855c9754f9-g46wr        kubernetes-dashboard
	1431eefc95167       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   fb18a798ee2de       coredns-66bc5c9577-sqrrf                     kube-system
	550e9323a161e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   19bae9c1afba1       busybox                                      default
	e62944b5dc101       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   b1b295cf593f1       kube-proxy-j9ngr                             kube-system
	3c115eaa48c2e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   fd86c4a270ce7       kindnet-q5mzm                                kube-system
	ec5a649a4f3ea       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   8c2f7b7532921       storage-provisioner                          kube-system
	4a176e83f0670       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   07f134b6910f9       kube-scheduler-embed-certs-348342            kube-system
	8fcdfc5fc2dc7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   03a6df5865a90       kube-apiserver-embed-certs-348342            kube-system
	c70dd3ad27c72       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   5624b7b544b9b       etcd-embed-certs-348342                      kube-system
	9e869b3a7afbb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   3ff9504082801       kube-controller-manager-embed-certs-348342   kube-system
	
	
	==> coredns [1431eefc9516720f0d87f27ee40753a17e6a3e1cdee8ecb4cadc2a143a7a7f26] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55993 - 25444 "HINFO IN 4078379182572549437.3236392264102063947. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012065176s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
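
All three reflector failures above reduce to one symptom: from inside the pod, TCP to the kubernetes Service VIP 10.96.0.1:443 times out until kube-proxy and the CNI finish programming the node (the "Caches are synced" lines in the kindnet and kube-proxy sections below mark roughly when that happens). A connectivity probe of the same shape:

// viprobe.go - sketch reproducing the "dial tcp 10.96.0.1:443: i/o timeout" check.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("service VIP unreachable:", err) // matches the coredns symptom
		return
	}
	conn.Close()
	fmt.Println("service VIP reachable")
}
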
	
	
	==> describe nodes <==
	Name:               embed-certs-348342
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-348342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=embed-certs-348342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_55_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:55:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-348342
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:57:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:57:56 +0000   Sat, 25 Oct 2025 10:55:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:57:56 +0000   Sat, 25 Oct 2025 10:55:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:57:56 +0000   Sat, 25 Oct 2025 10:55:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:57:56 +0000   Sat, 25 Oct 2025 10:56:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-348342
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                16712958-e8b7-42c4-971b-a9b56c3615de
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-sqrrf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-embed-certs-348342                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-q5mzm                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-embed-certs-348342             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-embed-certs-348342    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-j9ngr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-embed-certs-348342             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ft6v5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-g46wr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m19s              kube-proxy       
	  Normal   Starting                 55s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m26s              kubelet          Node embed-certs-348342 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m26s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s              kubelet          Node embed-certs-348342 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m26s              kubelet          Node embed-certs-348342 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m26s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m22s              node-controller  Node embed-certs-348342 event: Registered Node embed-certs-348342 in Controller
	  Normal   NodeReady                100s               kubelet          Node embed-certs-348342 status is now: NodeReady
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node embed-certs-348342 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node embed-certs-348342 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node embed-certs-348342 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                node-controller  Node embed-certs-348342 event: Registered Node embed-certs-348342 in Controller
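
For reference, the 850m (42%) CPU request in the Allocated resources table is just the column sum above: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 2000m allocatable is 42.5%, shown truncated as 42%.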
	
	
	==> dmesg <==
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[ +23.146409] overlayfs: idmapped layers are currently not supported
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
	[Oct25 10:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:55] overlayfs: idmapped layers are currently not supported
	[Oct25 10:56] overlayfs: idmapped layers are currently not supported
	[ +41.501413] overlayfs: idmapped layers are currently not supported
	[Oct25 10:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c70dd3ad27c72e73d7f22a0f8ce5472875ecc49420f54d9480a48af44851b43d] <==
	{"level":"warn","ts":"2025-10-25T10:57:02.766444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:02.799894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:02.833668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:02.864080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:02.913423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:02.939026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:02.979541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.003840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.029420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.062904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.105442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.130679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.156297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.210169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.228875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.253612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.286182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.322137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.385439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.409503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.454460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.483876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.517070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.582593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.649476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33818","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:58:02 up  2:40,  0 user,  load average: 3.73, 3.49, 2.96
	Linux embed-certs-348342 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c115eaa48c2ed4a4235288bce281b06608a49db9d4580641620a0c3eee76305] <==
	I1025 10:57:06.915550       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:57:06.915803       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:57:06.916010       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:57:06.916060       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:57:06.916099       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:57:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:57:07.115041       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:57:07.115109       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:57:07.115151       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:57:07.116049       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:57:37.115902       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:57:37.116126       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:57:37.116216       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:57:37.116294       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1025 10:57:38.615795       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:57:38.615915       1 metrics.go:72] Registering metrics
	I1025 10:57:38.616007       1 controller.go:711] "Syncing nftables rules"
	I1025 10:57:47.114565       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:57:47.114753       1 main.go:301] handling current node
	I1025 10:57:57.115180       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:57:57.115393       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8fcdfc5fc2dc75f67348b352c94dacbcef58121b8688bd5a6ea85732681228cd] <==
	I1025 10:57:05.300029       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:57:05.300622       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:57:05.300647       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:57:05.300655       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:57:05.300661       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:57:05.309825       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:57:05.309860       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:57:05.310132       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:57:05.324267       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:57:05.324350       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:57:05.324364       1 policy_source.go:240] refreshing policies
	I1025 10:57:05.332116       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:57:05.374221       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1025 10:57:05.415177       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:57:05.863750       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:57:06.127986       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:57:06.203429       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:57:06.266047       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:57:06.364517       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:57:06.460762       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:57:06.653836       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.45.12"}
	I1025 10:57:06.672077       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.35.130"}
	I1025 10:57:08.408060       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:57:08.857676       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:57:08.907543       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9e869b3a7afbb096c23279c50a357f29f02843cd43be8ae3176e4dc15d9e713d] <==
	I1025 10:57:08.301586       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:57:08.301642       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:57:08.302891       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:57:08.303171       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:57:08.304653       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:57:08.307973       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:57:08.308081       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:57:08.311115       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:57:08.318478       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:57:08.331901       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:57:08.332024       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:57:08.332053       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:57:08.332059       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:57:08.332065       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:57:08.339546       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:57:08.354218       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:57:08.354437       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:57:08.354520       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:57:08.354613       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-348342"
	I1025 10:57:08.354665       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:57:08.355288       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:57:08.356930       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:57:08.356969       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:57:08.356999       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:57:08.357073       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [e62944b5dc1016625be50b0fd9819e27fccc5caa6393d6057a1e8c1b42dd6493] <==
	I1025 10:57:06.904520       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:57:07.011971       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:57:07.115441       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:57:07.115546       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:57:07.115782       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:57:07.230562       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:57:07.230635       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:57:07.234819       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:57:07.235148       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:57:07.235223       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:57:07.236725       1 config.go:200] "Starting service config controller"
	I1025 10:57:07.236744       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:57:07.236775       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:57:07.236780       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:57:07.236793       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:57:07.236797       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:57:07.237421       1 config.go:309] "Starting node config controller"
	I1025 10:57:07.237439       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:57:07.237445       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:57:07.337029       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:57:07.337041       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:57:07.337083       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4a176e83f06702f09feac763002a74b8b8a030874adc921f8bddd98aa3c974d4] <==
	I1025 10:57:02.494431       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:57:05.416093       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:57:05.416130       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:57:05.449094       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:57:05.449241       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:57:05.449420       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:57:05.449220       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:57:05.449976       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:57:05.449261       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:57:05.449277       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:57:05.463214       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:57:05.552709       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 10:57:05.552857       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:57:05.564877       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:57:08 embed-certs-348342 kubelet[777]: I1025 10:57:08.563217     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdsct\" (UniqueName: \"kubernetes.io/projected/aac12e07-8479-4b85-840a-a58bb745ba59-kube-api-access-qdsct\") pod \"dashboard-metrics-scraper-6ffb444bf9-ft6v5\" (UID: \"aac12e07-8479-4b85-840a-a58bb745ba59\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5"
	Oct 25 10:57:08 embed-certs-348342 kubelet[777]: I1025 10:57:08.563278     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aac12e07-8479-4b85-840a-a58bb745ba59-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ft6v5\" (UID: \"aac12e07-8479-4b85-840a-a58bb745ba59\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5"
	Oct 25 10:57:08 embed-certs-348342 kubelet[777]: I1025 10:57:08.563307     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krfg9\" (UniqueName: \"kubernetes.io/projected/80e0bcd4-8c33-402a-ae1a-b8fbcd2183cf-kube-api-access-krfg9\") pod \"kubernetes-dashboard-855c9754f9-g46wr\" (UID: \"80e0bcd4-8c33-402a-ae1a-b8fbcd2183cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-g46wr"
	Oct 25 10:57:08 embed-certs-348342 kubelet[777]: I1025 10:57:08.563329     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/80e0bcd4-8c33-402a-ae1a-b8fbcd2183cf-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-g46wr\" (UID: \"80e0bcd4-8c33-402a-ae1a-b8fbcd2183cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-g46wr"
	Oct 25 10:57:08 embed-certs-348342 kubelet[777]: W1025 10:57:08.831448     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/crio-ab1a32f8e4eb5654189a592e5629b78838f9adb0db0cdc87553878a7eda79f69 WatchSource:0}: Error finding container ab1a32f8e4eb5654189a592e5629b78838f9adb0db0cdc87553878a7eda79f69: Status 404 returned error can't find the container with id ab1a32f8e4eb5654189a592e5629b78838f9adb0db0cdc87553878a7eda79f69
	Oct 25 10:57:08 embed-certs-348342 kubelet[777]: W1025 10:57:08.845212     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/crio-de41e8f026b0bd11c035134ba8711f7deb7cae6c63afc760b019eb03ad830294 WatchSource:0}: Error finding container de41e8f026b0bd11c035134ba8711f7deb7cae6c63afc760b019eb03ad830294: Status 404 returned error can't find the container with id de41e8f026b0bd11c035134ba8711f7deb7cae6c63afc760b019eb03ad830294
	Oct 25 10:57:19 embed-certs-348342 kubelet[777]: I1025 10:57:19.446743     777 scope.go:117] "RemoveContainer" containerID="83f7d9d0413ccb3c532f6a156a2435ad58d8debab872b7c2e64e60888ba22d28"
	Oct 25 10:57:19 embed-certs-348342 kubelet[777]: I1025 10:57:19.500887     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-g46wr" podStartSLOduration=6.429830338 podStartE2EDuration="11.494335822s" podCreationTimestamp="2025-10-25 10:57:08 +0000 UTC" firstStartedPulling="2025-10-25 10:57:08.835489038 +0000 UTC m=+9.736494016" lastFinishedPulling="2025-10-25 10:57:13.89999453 +0000 UTC m=+14.800999500" observedRunningTime="2025-10-25 10:57:14.452426039 +0000 UTC m=+15.353431017" watchObservedRunningTime="2025-10-25 10:57:19.494335822 +0000 UTC m=+20.395340809"
	Oct 25 10:57:20 embed-certs-348342 kubelet[777]: I1025 10:57:20.451861     777 scope.go:117] "RemoveContainer" containerID="83f7d9d0413ccb3c532f6a156a2435ad58d8debab872b7c2e64e60888ba22d28"
	Oct 25 10:57:20 embed-certs-348342 kubelet[777]: I1025 10:57:20.452438     777 scope.go:117] "RemoveContainer" containerID="bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285"
	Oct 25 10:57:20 embed-certs-348342 kubelet[777]: E1025 10:57:20.452938     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft6v5_kubernetes-dashboard(aac12e07-8479-4b85-840a-a58bb745ba59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5" podUID="aac12e07-8479-4b85-840a-a58bb745ba59"
	Oct 25 10:57:21 embed-certs-348342 kubelet[777]: I1025 10:57:21.454903     777 scope.go:117] "RemoveContainer" containerID="bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285"
	Oct 25 10:57:21 embed-certs-348342 kubelet[777]: E1025 10:57:21.455502     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft6v5_kubernetes-dashboard(aac12e07-8479-4b85-840a-a58bb745ba59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5" podUID="aac12e07-8479-4b85-840a-a58bb745ba59"
	Oct 25 10:57:27 embed-certs-348342 kubelet[777]: I1025 10:57:27.013304     777 scope.go:117] "RemoveContainer" containerID="bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285"
	Oct 25 10:57:27 embed-certs-348342 kubelet[777]: E1025 10:57:27.013514     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft6v5_kubernetes-dashboard(aac12e07-8479-4b85-840a-a58bb745ba59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5" podUID="aac12e07-8479-4b85-840a-a58bb745ba59"
	Oct 25 10:57:37 embed-certs-348342 kubelet[777]: I1025 10:57:37.496115     777 scope.go:117] "RemoveContainer" containerID="ec5a649a4f3eaa7fedb4b62e1ed03f701beb70acb1dad6d653e9d16c77f9c2c0"
	Oct 25 10:57:39 embed-certs-348342 kubelet[777]: I1025 10:57:39.259139     777 scope.go:117] "RemoveContainer" containerID="bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285"
	Oct 25 10:57:39 embed-certs-348342 kubelet[777]: I1025 10:57:39.505633     777 scope.go:117] "RemoveContainer" containerID="bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285"
	Oct 25 10:57:39 embed-certs-348342 kubelet[777]: I1025 10:57:39.506324     777 scope.go:117] "RemoveContainer" containerID="5a71a6b9c4cc471507afdaeb55835fbb3036fed68bb602a6960c9402df693838"
	Oct 25 10:57:39 embed-certs-348342 kubelet[777]: E1025 10:57:39.506677     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft6v5_kubernetes-dashboard(aac12e07-8479-4b85-840a-a58bb745ba59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5" podUID="aac12e07-8479-4b85-840a-a58bb745ba59"
	Oct 25 10:57:47 embed-certs-348342 kubelet[777]: I1025 10:57:47.012792     777 scope.go:117] "RemoveContainer" containerID="5a71a6b9c4cc471507afdaeb55835fbb3036fed68bb602a6960c9402df693838"
	Oct 25 10:57:47 embed-certs-348342 kubelet[777]: E1025 10:57:47.012999     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft6v5_kubernetes-dashboard(aac12e07-8479-4b85-840a-a58bb745ba59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5" podUID="aac12e07-8479-4b85-840a-a58bb745ba59"
	Oct 25 10:57:58 embed-certs-348342 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:57:58 embed-certs-348342 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:57:58 embed-certs-348342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b665fbb37c2b93157ebbfdb2f5bf74ca890f415c87fe011f26d3fb206ab2b0a8] <==
	2025/10/25 10:57:13 Using namespace: kubernetes-dashboard
	2025/10/25 10:57:13 Using in-cluster config to connect to apiserver
	2025/10/25 10:57:13 Using secret token for csrf signing
	2025/10/25 10:57:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:57:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:57:13 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:57:13 Generating JWE encryption key
	2025/10/25 10:57:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:57:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:57:14 Initializing JWE encryption key from synchronized object
	2025/10/25 10:57:14 Creating in-cluster Sidecar client
	2025/10/25 10:57:14 Serving insecurely on HTTP port: 9090
	2025/10/25 10:57:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:57:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:57:13 Starting overwatch
	
	
	==> storage-provisioner [d5e3c73b3bb3432e2b9fbc1613b368968b855342702a5992c6d90219ffc7d2f4] <==
	I1025 10:57:37.596144       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:57:37.628811       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:57:37.629184       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:57:37.635056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:41.090771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:45.353327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:48.954054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:52.011599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:55.035767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:55.046121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:57:55.046291       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:57:55.049038       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-348342_d4dbc423-da8e-4efe-9a6c-8f6a1604cd57!
	I1025 10:57:55.052764       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7ec49e55-bd27-4484-99d7-316a9176b2fc", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-348342_d4dbc423-da8e-4efe-9a6c-8f6a1604cd57 became leader
	W1025 10:57:55.060260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:55.067668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:57:55.149849       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-348342_d4dbc423-da8e-4efe-9a6c-8f6a1604cd57!
	W1025 10:57:57.071546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:57.077708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:59.081973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:59.090451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:01.096153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:01.106627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ec5a649a4f3eaa7fedb4b62e1ed03f701beb70acb1dad6d653e9d16c77f9c2c0] <==
	I1025 10:57:06.771237       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:57:36.773402       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
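The storage-provisioner logs above repeatedly warn "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" because the provisioner takes its "k8s.io-minikube-hostpath" leader-election lock on a v1 Endpoints object. For reference only (this is not minikube's code), a minimal client-go sketch of taking the same lock on a coordination.k8s.io Lease, which avoids the deprecation warning, could look like the following; the identity string and timings are assumptions:

	// Sketch: leader election on a Lease instead of the deprecated Endpoints lock.
	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same lock name/namespace as in the logs above, held as a Lease.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "embed-certs-348342-example"}, // illustrative identity
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader, starting provisioner") },
				OnStoppedLeading: func() { log.Println("lost lease, shutting down") },
			},
		})
	}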
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-348342 -n embed-certs-348342
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-348342 -n embed-certs-348342: exit status 2 (700.704248ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
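The "(may be ok)" note reflects that minikube status deliberately encodes cluster/component state in its exit code, so a paused or partially stopped cluster can exit nonzero (here exit 2) even while the requested field prints "Running". A small sketch of how a caller can capture both the printed field and the exit code (binary path and profile name copied from the run above) might look like:

	// Sketch: run "minikube status" and read both stdout and the exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "embed-certs-348342")
		out, err := cmd.Output()

		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A nonzero exit encodes component state; the exact mapping is
			// minikube's, and the printed text can still say "Running".
			code = exitErr.ExitCode()
		} else if err != nil {
			fmt.Println("failed to run:", err)
			return
		}
		fmt.Printf("status=%q exit=%d\n", string(out), code)
	}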
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-348342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
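The kubectl check above (get po --field-selector=status.phase!=Running) has a direct client-go equivalent; a hedged sketch follows, where the kubeconfig path is illustrative rather than the harness's actual path:

	// Sketch: list pods in all namespaces whose phase is not Running.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		pods, err := client.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}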
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-348342
helpers_test.go:243: (dbg) docker inspect embed-certs-348342:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4",
	        "Created": "2025-10-25T10:55:14.663333918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 454884,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:56:51.874772685Z",
	            "FinishedAt": "2025-10-25T10:56:51.022204377Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/hosts",
	        "LogPath": "/var/lib/docker/containers/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4-json.log",
	        "Name": "/embed-certs-348342",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-348342:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-348342",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4",
	                "LowerDir": "/var/lib/docker/overlay2/7af8a0a0e4548ff306a21c56011c7ef1e62940e78c923925b89499c6f933074a-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7af8a0a0e4548ff306a21c56011c7ef1e62940e78c923925b89499c6f933074a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7af8a0a0e4548ff306a21c56011c7ef1e62940e78c923925b89499c6f933074a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7af8a0a0e4548ff306a21c56011c7ef1e62940e78c923925b89499c6f933074a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-348342",
	                "Source": "/var/lib/docker/volumes/embed-certs-348342/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-348342",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-348342",
	                "name.minikube.sigs.k8s.io": "embed-certs-348342",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a71712838e7cfe072d0916f031751a5ebf1fdda02ee2ee24d555f4d6f99dc3e3",
	            "SandboxKey": "/var/run/docker/netns/a71712838e7c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-348342": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:16:a8:b6:48:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9165ff42962d9a3f99eefc8873610a74534a4c5300b06a1e9249fa26eacccff4",
	                    "EndpointID": "ab90b946da0d243aef1ee2036d3c376806832855307e0e2618a2e4c1ea4edf9b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-348342",
	                        "f2631e70db67"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
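The inspect output above shows every container port published only on 127.0.0.1 with an ephemeral host port (e.g. 8443/tcp bound to 33436). For reference, a minimal sketch reading the same port map through the Docker Engine Go SDK, instead of shelling out to docker inspect, could be:

	// Sketch: read a container's published port bindings via the Docker SDK.
	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		inspect, err := cli.ContainerInspect(context.Background(), "embed-certs-348342")
		if err != nil {
			log.Fatal(err)
		}
		// NetworkSettings.Ports maps container ports ("8443/tcp") to host
		// bindings such as 127.0.0.1:33436, matching the JSON above.
		for port, bindings := range inspect.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}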
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-348342 -n embed-certs-348342
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-348342 -n embed-certs-348342: exit status 2 (504.995045ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-348342 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-348342 logs -n 25: (1.870450403s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:53 UTC │ 25 Oct 25 10:54 UTC │
	│ image   │ old-k8s-version-031983 image list --format=json                                                                                                                                                                                               │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ pause   │ -p old-k8s-version-031983 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │                     │
	│ delete  │ -p old-k8s-version-031983                                                                                                                                                                                                                     │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ delete  │ -p old-k8s-version-031983                                                                                                                                                                                                                     │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p cert-expiration-736062 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ delete  │ -p cert-expiration-736062                                                                                                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-223394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-223394 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-223394 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-348342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │                     │
	│ stop    │ -p embed-certs-348342 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-348342 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:57 UTC │
	│ image   │ default-k8s-diff-port-223394 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p default-k8s-diff-port-223394 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p disable-driver-mounts-487220                                                                                                                                                                                                               │ disable-driver-mounts-487220 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ start   │ -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ image   │ embed-certs-348342 image list --format=json                                                                                                                                                                                                   │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p embed-certs-348342 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:57:27
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:57:27.266994  458353 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:57:27.267121  458353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:57:27.267133  458353 out.go:374] Setting ErrFile to fd 2...
	I1025 10:57:27.267139  458353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:57:27.267387  458353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:57:27.267802  458353 out.go:368] Setting JSON to false
	I1025 10:57:27.268769  458353 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9599,"bootTime":1761380249,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:57:27.268836  458353 start.go:141] virtualization:  
	I1025 10:57:27.272654  458353 out.go:179] * [no-preload-093313] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:57:27.276720  458353 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:57:27.276821  458353 notify.go:220] Checking for updates...
	I1025 10:57:27.282832  458353 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:57:27.285825  458353 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:57:27.288870  458353 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:57:27.291919  458353 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:57:27.294912  458353 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:57:27.298612  458353 config.go:182] Loaded profile config "embed-certs-348342": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:57:27.298761  458353 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:57:27.333894  458353 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:57:27.334043  458353 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:57:27.396018  458353 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:57:27.386534022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:57:27.396127  458353 docker.go:318] overlay module found
	I1025 10:57:27.399328  458353 out.go:179] * Using the docker driver based on user configuration
	I1025 10:57:27.402249  458353 start.go:305] selected driver: docker
	I1025 10:57:27.402283  458353 start.go:925] validating driver "docker" against <nil>
	I1025 10:57:27.402297  458353 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:57:27.403081  458353 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:57:27.463467  458353 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:57:27.453826266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:57:27.463628  458353 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:57:27.463872  458353 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:57:27.466740  458353 out.go:179] * Using Docker driver with root privileges
	I1025 10:57:27.470035  458353 cni.go:84] Creating CNI manager for ""
	I1025 10:57:27.470110  458353 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:57:27.470128  458353 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:57:27.470214  458353 start.go:349] cluster config:
	{Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:57:27.473392  458353 out.go:179] * Starting "no-preload-093313" primary control-plane node in "no-preload-093313" cluster
	I1025 10:57:27.476214  458353 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:57:27.479350  458353 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:57:27.482984  458353 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:57:27.483084  458353 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:57:27.483188  458353 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/config.json ...
	I1025 10:57:27.483247  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/config.json: {Name:mkbe6200dcd9ee7626a1ef8f7eea52da25c61105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:27.486335  458353 cache.go:107] acquiring lock: {Name:mke50a780b6f2fd20bf0f3807e5c55f2165bbc2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.486487  458353 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 10:57:27.486503  458353 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.921421ms
	I1025 10:57:27.486570  458353 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 10:57:27.486597  458353 cache.go:107] acquiring lock: {Name:mk6e894f2fc5a822328f2889957353638b611d87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.487374  458353 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:27.487723  458353 cache.go:107] acquiring lock: {Name:mk31460a278f5ce669dba0a3edc67dec38888d3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.487829  458353 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:27.488043  458353 cache.go:107] acquiring lock: {Name:mk4eab06b911708d94fc84824aa5eaf12c5f728f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.488168  458353 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:27.488419  458353 cache.go:107] acquiring lock: {Name:mk0fabb771ebb58b343ccbfcf727bcc4ba36d3bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.488589  458353 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:27.488890  458353 cache.go:107] acquiring lock: {Name:mk9b73d996269c05e36f39d743e660929113e3bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.488986  458353 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1025 10:57:27.489234  458353 cache.go:107] acquiring lock: {Name:mka63f62ad185c4a0c57416430877cf896f4796b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.489333  458353 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:27.489604  458353 cache.go:107] acquiring lock: {Name:mk3432d572d15dfd7f5ddfb6ca632d44b3f5c29a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.490393  458353 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:27.495484  458353 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:27.498083  458353 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:27.498301  458353 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:27.498548  458353 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:27.498747  458353 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1025 10:57:27.499648  458353 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:27.499833  458353 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:27.503605  458353 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:57:27.503625  458353 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:57:27.503639  458353 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:57:27.503673  458353 start.go:360] acquireMachinesLock for no-preload-093313: {Name:mk08df2ba22812bd327cf8f3a536e0d3054c6132 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:57:27.503778  458353 start.go:364] duration metric: took 89.904µs to acquireMachinesLock for "no-preload-093313"
	I1025 10:57:27.503803  458353 start.go:93] Provisioning new machine with config: &{Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:57:27.503867  458353 start.go:125] createHost starting for "" (driver="docker")
	W1025 10:57:28.303215  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:30.802156  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	I1025 10:57:27.507621  458353 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:57:27.507876  458353 start.go:159] libmachine.API.Create for "no-preload-093313" (driver="docker")
	I1025 10:57:27.507911  458353 client.go:168] LocalClient.Create starting
	I1025 10:57:27.508065  458353 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem
	I1025 10:57:27.508111  458353 main.go:141] libmachine: Decoding PEM data...
	I1025 10:57:27.508134  458353 main.go:141] libmachine: Parsing certificate...
	I1025 10:57:27.508196  458353 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem
	I1025 10:57:27.508220  458353 main.go:141] libmachine: Decoding PEM data...
	I1025 10:57:27.508235  458353 main.go:141] libmachine: Parsing certificate...
	I1025 10:57:27.508682  458353 cli_runner.go:164] Run: docker network inspect no-preload-093313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:57:27.525520  458353 cli_runner.go:211] docker network inspect no-preload-093313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:57:27.525611  458353 network_create.go:284] running [docker network inspect no-preload-093313] to gather additional debugging logs...
	I1025 10:57:27.525636  458353 cli_runner.go:164] Run: docker network inspect no-preload-093313
	W1025 10:57:27.547695  458353 cli_runner.go:211] docker network inspect no-preload-093313 returned with exit code 1
	I1025 10:57:27.547733  458353 network_create.go:287] error running [docker network inspect no-preload-093313]: docker network inspect no-preload-093313: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-093313 not found
	I1025 10:57:27.547747  458353 network_create.go:289] output of [docker network inspect no-preload-093313]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-093313 not found
	
	** /stderr **
	I1025 10:57:27.547842  458353 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:57:27.565835  458353 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2218a4d410c8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:a0:c3:54:c6:1f} reservation:<nil>}
	I1025 10:57:27.566348  458353 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-249eaf2d238d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:87:b9:4d:4c:0d} reservation:<nil>}
	I1025 10:57:27.566626  458353 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-210d4b236ff6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:d5:32:45:e6:85} reservation:<nil>}
	I1025 10:57:27.566942  458353 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9165ff42962d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:94:0e:3f:4d:73} reservation:<nil>}
	I1025 10:57:27.567409  458353 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c2f170}
	I1025 10:57:27.567433  458353 network_create.go:124] attempt to create docker network no-preload-093313 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 10:57:27.567510  458353 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-093313 no-preload-093313
	I1025 10:57:27.625702  458353 network_create.go:108] docker network no-preload-093313 192.168.85.0/24 created
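The "skipping subnet" scan above walks candidate /24 blocks (192.168.49.0, .58.0, .67.0, .76.0) until it finds a free one, then creates the bridge network with an explicit gateway and MTU. A minimal shell sketch of the equivalent manual steps, assuming the name and subnet from this run are still free on your host (the logged command also passes `-o --ip-masq -o --icc`, omitted here for brevity):

	# list existing bridge subnets (what the scan is checking against)
	docker network ls -q --filter driver=bridge | xargs docker network inspect \
	  --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# create the network the way the log does
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o com.docker.network.driver.mtu=1500 \
	  --label=name.minikube.sigs.k8s.io=no-preload-093313 no-preload-093313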
	I1025 10:57:27.625744  458353 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-093313" container
	I1025 10:57:27.625817  458353 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:57:27.651138  458353 cli_runner.go:164] Run: docker volume create no-preload-093313 --label name.minikube.sigs.k8s.io=no-preload-093313 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:57:27.675686  458353 oci.go:103] Successfully created a docker volume no-preload-093313
	I1025 10:57:27.675766  458353 cli_runner.go:164] Run: docker run --rm --name no-preload-093313-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-093313 --entrypoint /usr/bin/test -v no-preload-093313:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
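The "-preload-sidecar" container above is only a smoke test: it mounts the new volume at /var and runs `/usr/bin/test -d /var/lib` as its entrypoint, which is why "Successfully prepared a docker volume" appears once it exits cleanly. A hedged reconstruction of the same check:

	# create the volume, then mount it at /var and check /var/lib exists
	# (docker seeds a fresh named volume from the image's /var on first mount)
	docker volume create no-preload-093313
	docker run --rm --entrypoint /usr/bin/test \
	  -v no-preload-093313:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 \
	  -d /var/lib && echo "volume OK"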
	I1025 10:57:27.815243  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1025 10:57:27.835924  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1025 10:57:27.840693  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1025 10:57:27.845761  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1025 10:57:27.857456  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1025 10:57:27.859707  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1025 10:57:27.867008  458353 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1025 10:57:27.922979  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1025 10:57:27.923030  458353 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 434.144085ms
	I1025 10:57:27.923045  458353 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 10:57:28.302176  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 10:57:28.302257  458353 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 813.839915ms
	I1025 10:57:28.302309  458353 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 10:57:28.347360  458353 oci.go:107] Successfully prepared a docker volume no-preload-093313
	I1025 10:57:28.347407  458353 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1025 10:57:28.347559  458353 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:57:28.347687  458353 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:57:28.407183  458353 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-093313 --name no-preload-093313 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-093313 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-093313 --network no-preload-093313 --ip 192.168.85.2 --volume no-preload-093313:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:57:28.851855  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 10:57:28.851884  458353 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.363841826s
	I1025 10:57:28.851896  458353 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 10:57:28.864131  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 10:57:28.867099  458353 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.379370089s
	I1025 10:57:28.867176  458353 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 10:57:28.887622  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Running}}
	I1025 10:57:28.928670  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 10:57:28.928693  458353 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.439097284s
	I1025 10:57:28.928706  458353 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 10:57:28.938558  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:57:28.979382  458353 cli_runner.go:164] Run: docker exec no-preload-093313 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:57:29.055045  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 10:57:29.055127  458353 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.568531939s
	I1025 10:57:29.055173  458353 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 10:57:29.082768  458353 oci.go:144] the created container "no-preload-093313" has a running status.
	I1025 10:57:29.082799  458353 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa...
	I1025 10:57:30.008547  458353 cache.go:157] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 10:57:30.008640  458353 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.519409679s
	I1025 10:57:30.008669  458353 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 10:57:30.008777  458353 cache.go:87] Successfully saved all images to host disk.
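Because this is the no-preload profile, the usual preload tarball is skipped and each image is downloaded and written out individually; the tarballs land under the per-architecture cache seen in the paths above (storage-provisioner lives under the sibling gcr.io tree):

	ls /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/
	# expected (hedged): coredns/  etcd_3.6.4-0  kube-apiserver_v1.34.1
	#   kube-controller-manager_v1.34.1  kube-proxy_v1.34.1  kube-scheduler_v1.34.1  pause_3.10.1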
	I1025 10:57:30.304023  458353 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:57:30.328526  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:57:30.349289  458353 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:57:30.349314  458353 kic_runner.go:114] Args: [docker exec --privileged no-preload-093313 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:57:30.399003  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:57:30.427588  458353 machine.go:93] provisionDockerMachine start ...
	I1025 10:57:30.427684  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:30.453517  458353 main.go:141] libmachine: Using SSH client type: native
	I1025 10:57:30.453880  458353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1025 10:57:30.453892  458353 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:57:30.622636  458353 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-093313
	
	I1025 10:57:30.622724  458353 ubuntu.go:182] provisioning hostname "no-preload-093313"
	I1025 10:57:30.622839  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:30.660734  458353 main.go:141] libmachine: Using SSH client type: native
	I1025 10:57:30.661052  458353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1025 10:57:30.661064  458353 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-093313 && echo "no-preload-093313" | sudo tee /etc/hostname
	I1025 10:57:30.824759  458353 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-093313
	
	I1025 10:57:30.824840  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:30.843271  458353 main.go:141] libmachine: Using SSH client type: native
	I1025 10:57:30.843578  458353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1025 10:57:30.843603  458353 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-093313' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-093313/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-093313' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:57:30.994203  458353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:57:30.994231  458353 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:57:30.994261  458353 ubuntu.go:190] setting up certificates
	I1025 10:57:30.994272  458353 provision.go:84] configureAuth start
	I1025 10:57:30.994337  458353 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-093313
	I1025 10:57:31.016332  458353 provision.go:143] copyHostCerts
	I1025 10:57:31.016409  458353 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:57:31.016422  458353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:57:31.016506  458353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:57:31.016609  458353 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:57:31.016621  458353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:57:31.016649  458353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:57:31.016707  458353 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:57:31.016718  458353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:57:31.016743  458353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:57:31.016797  458353 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.no-preload-093313 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-093313]
	I1025 10:57:31.101971  458353 provision.go:177] copyRemoteCerts
	I1025 10:57:31.102060  458353 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:57:31.102112  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:31.121094  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:57:31.226026  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:57:31.245034  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:57:31.262816  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:57:31.280790  458353 provision.go:87] duration metric: took 286.491926ms to configureAuth
	I1025 10:57:31.280860  458353 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:57:31.281064  458353 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:57:31.281180  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:31.303466  458353 main.go:141] libmachine: Using SSH client type: native
	I1025 10:57:31.303767  458353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1025 10:57:31.303787  458353 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:57:31.662087  458353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:57:31.662112  458353 machine.go:96] duration metric: took 1.234503383s to provisionDockerMachine
	I1025 10:57:31.662122  458353 client.go:171] duration metric: took 4.154199937s to LocalClient.Create
	I1025 10:57:31.662135  458353 start.go:167] duration metric: took 4.15426155s to libmachine.API.Create "no-preload-093313"
	I1025 10:57:31.662143  458353 start.go:293] postStartSetup for "no-preload-093313" (driver="docker")
	I1025 10:57:31.662156  458353 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:57:31.662232  458353 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:57:31.662274  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:31.685652  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:57:31.794217  458353 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:57:31.798927  458353 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:57:31.798958  458353 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:57:31.798969  458353 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:57:31.799030  458353 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:57:31.799125  458353 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:57:31.799259  458353 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:57:31.807390  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:57:31.827171  458353 start.go:296] duration metric: took 165.011157ms for postStartSetup
	I1025 10:57:31.827548  458353 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-093313
	I1025 10:57:31.843969  458353 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/config.json ...
	I1025 10:57:31.844259  458353 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:57:31.844311  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:31.862993  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:57:31.966997  458353 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:57:31.971569  458353 start.go:128] duration metric: took 4.467687956s to createHost
	I1025 10:57:31.971595  458353 start.go:83] releasing machines lock for "no-preload-093313", held for 4.467807916s
	I1025 10:57:31.971669  458353 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-093313
	I1025 10:57:31.992126  458353 ssh_runner.go:195] Run: cat /version.json
	I1025 10:57:31.992167  458353 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:57:31.992180  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:31.992230  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:32.011435  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:57:32.023648  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:57:32.210715  458353 ssh_runner.go:195] Run: systemctl --version
	I1025 10:57:32.217197  458353 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:57:32.253752  458353 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:57:32.258198  458353 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:57:32.258266  458353 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:57:32.292323  458353 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
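The find above moves any bridge/podman CNI configs out of CRI-O's search path by appending a .mk_disabled suffix, so only the CNI minikube installs later (kindnet, per the "recommending kindnet" line further down) is active. On the node this should look roughly like:

	ls /etc/cni/net.d/
	# names taken from the "disabled [...]" line above; hedged:
	# 87-podman-bridge.conflist.mk_disabled
	# 10-crio-bridge.conflist.disabled.mk_disabled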
	I1025 10:57:32.292356  458353 start.go:495] detecting cgroup driver to use...
	I1025 10:57:32.292390  458353 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:57:32.292455  458353 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:57:32.312284  458353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:57:32.325412  458353 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:57:32.325517  458353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:57:32.341679  458353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:57:32.361342  458353 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:57:32.494164  458353 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:57:32.618118  458353 docker.go:234] disabling docker service ...
	I1025 10:57:32.618185  458353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:57:32.642262  458353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:57:32.656773  458353 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:57:32.777089  458353 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:57:32.903271  458353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:57:32.919375  458353 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:57:32.935963  458353 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:57:32.936044  458353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:32.946528  458353 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:57:32.946605  458353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:32.957072  458353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:32.967015  458353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:32.975916  458353 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:57:32.983928  458353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:32.992998  458353 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:33.011584  458353 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:57:33.021344  458353 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:57:33.029523  458353 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:57:33.037529  458353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:57:33.151248  458353 ssh_runner.go:195] Run: sudo systemctl restart crio
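The sed sequence above rewrites CRI-O's drop-in config before the restart. A quick way to confirm the intended end state, with values taken from the commands in the log:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])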
	I1025 10:57:33.281781  458353 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:57:33.281850  458353 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:57:33.286033  458353 start.go:563] Will wait 60s for crictl version
	I1025 10:57:33.286097  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.289581  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:57:33.317230  458353 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:57:33.317317  458353 ssh_runner.go:195] Run: crio --version
	I1025 10:57:33.347371  458353 ssh_runner.go:195] Run: crio --version
	I1025 10:57:33.379140  458353 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1025 10:57:33.300821  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:35.303297  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	I1025 10:57:33.382012  458353 cli_runner.go:164] Run: docker network inspect no-preload-093313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:57:33.399575  458353 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:57:33.403658  458353 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:57:33.415760  458353 kubeadm.go:883] updating cluster {Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:57:33.415872  458353 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:57:33.415918  458353 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:57:33.442593  458353 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1025 10:57:33.442621  458353 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 10:57:33.442667  458353 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:33.442877  458353 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:33.442999  458353 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:33.443082  458353 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:33.443171  458353 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:33.443254  458353 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1025 10:57:33.443364  458353 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:33.443466  458353 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:33.444225  458353 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:33.444783  458353 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1025 10:57:33.444976  458353 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:33.445270  458353 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:33.445528  458353 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:33.445684  458353 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:33.445833  458353 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:33.445975  458353 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:33.661801  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:33.664167  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:33.671096  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:33.674105  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1025 10:57:33.685201  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:33.687990  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:33.695096  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:33.768609  458353 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1025 10:57:33.768694  458353 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:33.768791  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.784811  458353 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1025 10:57:33.784894  458353 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:33.784984  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.831440  458353 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1025 10:57:33.831521  458353 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:33.831602  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.848442  458353 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1025 10:57:33.848483  458353 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1025 10:57:33.848531  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.848662  458353 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1025 10:57:33.848681  458353 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:33.848709  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.848790  458353 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1025 10:57:33.848814  458353 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:33.848843  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.848900  458353 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1025 10:57:33.848915  458353 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:33.848934  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:33.849015  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:33.849085  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:33.849134  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:33.878985  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 10:57:33.879137  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:33.879176  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:33.881310  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:33.972460  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:33.972572  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:33.972649  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:33.987207  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:33.987383  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:33.987386  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 10:57:33.989807  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:34.075601  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:57:34.075775  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:57:34.075928  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:57:34.102459  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 10:57:34.102660  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:57:34.102729  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:57:34.102898  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 10:57:34.167174  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1025 10:57:34.167347  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1025 10:57:34.167488  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1025 10:57:34.167599  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1025 10:57:34.167716  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 10:57:34.167776  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 10:57:34.212533  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1025 10:57:34.212633  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1025 10:57:34.212698  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1025 10:57:34.212750  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1025 10:57:34.212801  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1025 10:57:34.212847  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 10:57:34.212901  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1025 10:57:34.212948  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1025 10:57:34.213011  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1025 10:57:34.213026  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1025 10:57:34.213069  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1025 10:57:34.213079  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1025 10:57:34.213116  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1025 10:57:34.213127  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1025 10:57:34.260089  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1025 10:57:34.260131  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1025 10:57:34.260198  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1025 10:57:34.260216  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1025 10:57:34.260263  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1025 10:57:34.260274  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1025 10:57:34.260314  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1025 10:57:34.260325  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
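Each "existence check ... Process exited with status 1" above is the expected miss that triggers the scp on the following line: the runner stats the target tarball over SSH and copies only when it is absent. A hedged manual equivalent of the check, using the SSH endpoint logged for this run (port 33438, key path as above):

	KEY=/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa
	ssh -i "$KEY" -p 33438 docker@127.0.0.1 \
	  'stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1'
	# exit status 1 with "No such file or directory" is what makes minikube copy the tarball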
	W1025 10:57:34.306891  458353 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1025 10:57:34.306952  458353 retry.go:31] will retry after 281.990125ms: ssh: rejected: connect failed (open failed)
	I1025 10:57:34.437954  458353 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1025 10:57:34.438050  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1025 10:57:34.438128  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:34.497127  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:57:34.589544  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:57:34.613300  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	W1025 10:57:34.886935  458353 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 10:57:34.887136  458353 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:34.963723  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1025 10:57:34.963761  458353 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1025 10:57:34.963814  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1025 10:57:35.038831  458353 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1025 10:57:35.038869  458353 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:35.038919  458353 ssh_runner.go:195] Run: which crictl
	I1025 10:57:36.942206  458353 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.97836345s)
	I1025 10:57:36.942233  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1025 10:57:36.942256  458353 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 10:57:36.942297  458353 ssh_runner.go:235] Completed: which crictl: (1.903363158s)
	I1025 10:57:36.942394  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:36.942304  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	W1025 10:57:37.801608  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:39.801849  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	I1025 10:57:38.473507  458353 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.531007488s)
	I1025 10:57:38.473538  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1025 10:57:38.473557  458353 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 10:57:38.473588  458353 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.531108494s)
	I1025 10:57:38.473608  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 10:57:38.473692  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:38.505084  458353 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:57:39.829580  458353 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.355942628s)
	I1025 10:57:39.829612  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1025 10:57:39.829621  458353 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.324505577s)
	I1025 10:57:39.829630  458353 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 10:57:39.829653  458353 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 10:57:39.829683  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 10:57:39.829735  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1025 10:57:41.246153  458353 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.416395957s)
	I1025 10:57:41.246189  458353 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1025 10:57:41.246225  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1025 10:57:41.246238  458353 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.416539203s)
	I1025 10:57:41.246254  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1025 10:57:41.246271  458353 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1025 10:57:41.246312  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	W1025 10:57:41.802393  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	W1025 10:57:43.802593  454751 pod_ready.go:104] pod "coredns-66bc5c9577-sqrrf" is not "Ready", error: <nil>
	I1025 10:57:44.801654  454751 pod_ready.go:94] pod "coredns-66bc5c9577-sqrrf" is "Ready"
	I1025 10:57:44.801690  454751 pod_ready.go:86] duration metric: took 38.006051015s for pod "coredns-66bc5c9577-sqrrf" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:44.804461  454751 pod_ready.go:83] waiting for pod "etcd-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:44.809433  454751 pod_ready.go:94] pod "etcd-embed-certs-348342" is "Ready"
	I1025 10:57:44.809463  454751 pod_ready.go:86] duration metric: took 4.971836ms for pod "etcd-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:44.812336  454751 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:44.817375  454751 pod_ready.go:94] pod "kube-apiserver-embed-certs-348342" is "Ready"
	I1025 10:57:44.817404  454751 pod_ready.go:86] duration metric: took 5.040439ms for pod "kube-apiserver-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:44.819911  454751 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:45.001226  454751 pod_ready.go:94] pod "kube-controller-manager-embed-certs-348342" is "Ready"
	I1025 10:57:45.001254  454751 pod_ready.go:86] duration metric: took 181.317288ms for pod "kube-controller-manager-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:45.203229  454751 pod_ready.go:83] waiting for pod "kube-proxy-j9ngr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:45.601094  454751 pod_ready.go:94] pod "kube-proxy-j9ngr" is "Ready"
	I1025 10:57:45.601124  454751 pod_ready.go:86] duration metric: took 397.86349ms for pod "kube-proxy-j9ngr" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:45.799535  454751 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:46.200699  454751 pod_ready.go:94] pod "kube-scheduler-embed-certs-348342" is "Ready"
	I1025 10:57:46.200729  454751 pod_ready.go:86] duration metric: took 401.154408ms for pod "kube-scheduler-embed-certs-348342" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:57:46.200740  454751 pod_ready.go:40] duration metric: took 39.459471369s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:57:46.285035  454751 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:57:46.297862  454751 out.go:179] * Done! kubectl is now configured to use "embed-certs-348342" cluster and "default" namespace by default
	I1025 10:57:42.994157  458353 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.747822653s)
	I1025 10:57:42.994189  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1025 10:57:42.994214  458353 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1025 10:57:42.994265  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1025 10:57:47.233142  458353 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.238849838s)
	I1025 10:57:47.233169  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1025 10:57:47.233188  458353 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 10:57:47.233242  458353 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1025 10:57:47.846175  458353 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 10:57:47.846215  458353 cache_images.go:124] Successfully loaded all cached images
	I1025 10:57:47.846221  458353 cache_images.go:93] duration metric: took 14.403586248s to LoadCachedImages
	I1025 10:57:47.846233  458353 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 10:57:47.846338  458353 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-093313 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:57:47.846438  458353 ssh_runner.go:195] Run: crio config
	I1025 10:57:47.915711  458353 cni.go:84] Creating CNI manager for ""
	I1025 10:57:47.915736  458353 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:57:47.915758  458353 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:57:47.915788  458353 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-093313 NodeName:no-preload-093313 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:57:47.915959  458353 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-093313"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
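	Note: the kubeadm config printed above is staged on the node as kubeadm.yaml.new (see the scp and cp steps later in this log) before kubeadm consumes it. A config like this can be exercised without touching the host via kubeadm's dry-run mode — a minimal sketch, path assumed from later in this log:
	
	  # render what kubeadm would do, without modifying the host
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run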
	
	I1025 10:57:47.916054  458353 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:57:47.926054  458353 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1025 10:57:47.926131  458353 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1025 10:57:47.934514  458353 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1025 10:57:47.934678  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1025 10:57:47.935231  458353 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1025 10:57:47.935246  458353 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1025 10:57:47.940005  458353 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1025 10:57:47.940046  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1025 10:57:48.624579  458353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:57:48.641344  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1025 10:57:48.647659  458353 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1025 10:57:48.647742  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1025 10:57:48.823100  458353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1025 10:57:48.838798  458353 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1025 10:57:48.838842  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
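	Note: each binary download above is pinned to the matching .sha256 file on dl.k8s.io via the checksum= fragment. The same verification can be done by hand; a minimal sketch for the kubectl binary:
	
	  curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"
	  echo "$(curl -L https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256)  kubectl" | sha256sum --check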
	I1025 10:57:49.298843  458353 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:57:49.307701  458353 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:57:49.322575  458353 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:57:49.337157  458353 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 10:57:49.352212  458353 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:57:49.355928  458353 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
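	Note: the hosts-file rewrite above strips any stale control-plane.minikube.internal entry before appending the current one, and it goes through a temp file plus "sudo cp" because a plain "sudo echo ... >> /etc/hosts" redirection would be performed by the unprivileged shell, not by sudo. Spelled out as a standalone sketch:
	
	  # idempotent hosts entry: drop the old line, append a fresh one, install via sudo
	  entry=$'192.168.85.2\tcontrol-plane.minikube.internal'
	  { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/hosts.new
	  sudo cp /tmp/hosts.new /etc/hosts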
	I1025 10:57:49.366536  458353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:57:49.494836  458353 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:57:49.511200  458353 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313 for IP: 192.168.85.2
	I1025 10:57:49.511227  458353 certs.go:195] generating shared ca certs ...
	I1025 10:57:49.511245  458353 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:49.511393  458353 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:57:49.511444  458353 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:57:49.511456  458353 certs.go:257] generating profile certs ...
	I1025 10:57:49.511515  458353 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.key
	I1025 10:57:49.511533  458353 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt with IP's: []
	I1025 10:57:49.921606  458353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt ...
	I1025 10:57:49.921640  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: {Name:mka498e73d17603c69366bc81d183c3446d69f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:49.921844  458353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.key ...
	I1025 10:57:49.921859  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.key: {Name:mkaecbe7725a6928cd3905888c40f2281bbc8469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:49.921954  458353 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key.bf0f12ad
	I1025 10:57:49.921970  458353 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt.bf0f12ad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1025 10:57:50.030460  458353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt.bf0f12ad ...
	I1025 10:57:50.030495  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt.bf0f12ad: {Name:mk18b59f4f7637f9c77d3f911f24dd6021c03ef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:50.030688  458353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key.bf0f12ad ...
	I1025 10:57:50.030699  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key.bf0f12ad: {Name:mk3a1a855460683d99627b6112aabbdd0deb59bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:50.030776  458353 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt.bf0f12ad -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt
	I1025 10:57:50.030860  458353 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key.bf0f12ad -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key
	I1025 10:57:50.030924  458353 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.key
	I1025 10:57:50.030949  458353 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.crt with IP's: []
	I1025 10:57:50.173557  458353 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.crt ...
	I1025 10:57:50.173601  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.crt: {Name:mkd9fc199c22a4ce62999321c0bc622710c23197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:50.173815  458353 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.key ...
	I1025 10:57:50.173832  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.key: {Name:mk59c61c585e86120ac2b64fcd17b5250f1be546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:57:50.174079  458353 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:57:50.174130  458353 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:57:50.174152  458353 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:57:50.174182  458353 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:57:50.174214  458353 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:57:50.174242  458353 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:57:50.174294  458353 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:57:50.174909  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:57:50.196503  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:57:50.218403  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:57:50.236859  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:57:50.255441  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:57:50.273937  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:57:50.293589  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:57:50.312501  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:57:50.330205  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:57:50.348928  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:57:50.366737  458353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:57:50.384996  458353 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:57:50.407098  458353 ssh_runner.go:195] Run: openssl version
	I1025 10:57:50.417230  458353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:57:50.425700  458353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:57:50.433289  458353 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:57:50.433359  458353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:57:50.477528  458353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:57:50.486306  458353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:57:50.494672  458353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:57:50.498367  458353 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:57:50.498475  458353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:57:50.539453  458353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:57:50.548016  458353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:57:50.556500  458353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:57:50.560228  458353 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:57:50.560337  458353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:57:50.601298  458353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
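	Note: the "openssl x509 -hash" calls above compute the subject-name hash that OpenSSL uses as the symlink name under /etc/ssl/certs, which is why minikubeCA.pem is linked as b5213941.0 earlier in this log. The mapping can be checked directly:
	
	  # print the hash OpenSSL will look for at verify time
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # -> b5213941, matching the /etc/ssl/certs/b5213941.0 link created above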
	I1025 10:57:50.609621  458353 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:57:50.613211  458353 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:57:50.613292  458353 kubeadm.go:400] StartCluster: {Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:57:50.613375  458353 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:57:50.613434  458353 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:57:50.646038  458353 cri.go:89] found id: ""
	I1025 10:57:50.646112  458353 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:57:50.654192  458353 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:57:50.661866  458353 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:57:50.661955  458353 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:57:50.669776  458353 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:57:50.669798  458353 kubeadm.go:157] found existing configuration files:
	
	I1025 10:57:50.669860  458353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:57:50.680229  458353 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:57:50.680305  458353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:57:50.688681  458353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:57:50.696527  458353 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:57:50.696642  458353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:57:50.703999  458353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:57:50.711889  458353 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:57:50.711967  458353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:57:50.719829  458353 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:57:50.727534  458353 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:57:50.727603  458353 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:57:50.735766  458353 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:57:50.773041  458353 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:57:50.773302  458353 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:57:50.803160  458353 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:57:50.803276  458353 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:57:50.803342  458353 kubeadm.go:318] OS: Linux
	I1025 10:57:50.803414  458353 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:57:50.803485  458353 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:57:50.803560  458353 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:57:50.803631  458353 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:57:50.803706  458353 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:57:50.803791  458353 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:57:50.803875  458353 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:57:50.803984  458353 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:57:50.804063  458353 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:57:50.871767  458353 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:57:50.871934  458353 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:57:50.872070  458353 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
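	Note: as the preflight message says, the control-plane images can be pulled ahead of time; a minimal sketch for this Kubernetes version:
	
	  kubeadm config images list --kubernetes-version v1.34.1
	  kubeadm config images pull --kubernetes-version v1.34.1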
	I1025 10:57:50.889440  458353 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:57:50.897114  458353 out.go:252]   - Generating certificates and keys ...
	I1025 10:57:50.897278  458353 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:57:50.897396  458353 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:57:50.977620  458353 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:57:51.184515  458353 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:57:52.539163  458353 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:57:52.855682  458353 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:57:53.120821  458353 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:57:53.121120  458353 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-093313] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 10:57:53.391662  458353 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:57:53.391908  458353 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-093313] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 10:57:53.767915  458353 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:57:54.080764  458353 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:57:54.752630  458353 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:57:54.752980  458353 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:57:55.125432  458353 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:57:56.042912  458353 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:57:56.238667  458353 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:57:56.777442  458353 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:57:57.636100  458353 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:57:57.636943  458353 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:57:57.642792  458353 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:57:57.646127  458353 out.go:252]   - Booting up control plane ...
	I1025 10:57:57.646259  458353 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:57:57.646358  458353 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:57:57.646481  458353 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:57:57.666150  458353 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:57:57.666307  458353 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:57:57.684083  458353 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:57:57.684642  458353 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:57:57.684712  458353 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:57:57.835805  458353 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:57:57.837191  458353 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:57:59.346433  458353 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.508259844s
	I1025 10:57:59.347654  458353 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:57:59.347752  458353 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1025 10:57:59.347978  458353 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:57:59.348065  458353 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.263207806Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.277919504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.283417925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.319122551Z" level=info msg="Created container 5a71a6b9c4cc471507afdaeb55835fbb3036fed68bb602a6960c9402df693838: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5/dashboard-metrics-scraper" id=e9cd59c6-48f7-483f-bd4a-51e584c00991 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.330655808Z" level=info msg="Starting container: 5a71a6b9c4cc471507afdaeb55835fbb3036fed68bb602a6960c9402df693838" id=1a9afbca-cf64-40ef-8be0-d66353223b33 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.336603898Z" level=info msg="Started container" PID=1638 containerID=5a71a6b9c4cc471507afdaeb55835fbb3036fed68bb602a6960c9402df693838 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5/dashboard-metrics-scraper id=1a9afbca-cf64-40ef-8be0-d66353223b33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=de41e8f026b0bd11c035134ba8711f7deb7cae6c63afc760b019eb03ad830294
	Oct 25 10:57:39 embed-certs-348342 conmon[1636]: conmon 5a71a6b9c4cc471507af <ninfo>: container 1638 exited with status 1
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.509362456Z" level=info msg="Removing container: bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285" id=da649283-88d0-466f-9242-365fc680d706 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.52581442Z" level=info msg="Error loading conmon cgroup of container bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285: cgroup deleted" id=da649283-88d0-466f-9242-365fc680d706 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:57:39 embed-certs-348342 crio[651]: time="2025-10-25T10:57:39.537565057Z" level=info msg="Removed container bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5/dashboard-metrics-scraper" id=da649283-88d0-466f-9242-365fc680d706 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.115019663Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.12120129Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.1212407Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.12126292Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.127336747Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.127373137Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.127398015Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.131024903Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.131225471Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.131301837Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.134770816Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.134806607Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.134830107Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.138459916Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:57:47 embed-certs-348342 crio[651]: time="2025-10-25T10:57:47.138494788Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5a71a6b9c4cc4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   de41e8f026b0b       dashboard-metrics-scraper-6ffb444bf9-ft6v5   kubernetes-dashboard
	d5e3c73b3bb34       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   8c2f7b7532921       storage-provisioner                          kube-system
	b665fbb37c2b9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   51 seconds ago       Running             kubernetes-dashboard        0                   ab1a32f8e4eb5       kubernetes-dashboard-855c9754f9-g46wr        kubernetes-dashboard
	1431eefc95167       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   fb18a798ee2de       coredns-66bc5c9577-sqrrf                     kube-system
	550e9323a161e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   19bae9c1afba1       busybox                                      default
	e62944b5dc101       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   b1b295cf593f1       kube-proxy-j9ngr                             kube-system
	3c115eaa48c2e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   fd86c4a270ce7       kindnet-q5mzm                                kube-system
	ec5a649a4f3ea       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   8c2f7b7532921       storage-provisioner                          kube-system
	4a176e83f0670       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   07f134b6910f9       kube-scheduler-embed-certs-348342            kube-system
	8fcdfc5fc2dc7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   03a6df5865a90       kube-apiserver-embed-certs-348342            kube-system
	c70dd3ad27c72       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   5624b7b544b9b       etcd-embed-certs-348342                      kube-system
	9e869b3a7afbb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   3ff9504082801       kube-controller-manager-embed-certs-348342   kube-system
	
	
	==> coredns [1431eefc9516720f0d87f27ee40753a17e6a3e1cdee8ecb4cadc2a143a7a7f26] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55993 - 25444 "HINFO IN 4078379182572549437.3236392264102063947. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012065176s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-348342
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-348342
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=embed-certs-348342
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_55_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:55:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-348342
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:57:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:57:56 +0000   Sat, 25 Oct 2025 10:55:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:57:56 +0000   Sat, 25 Oct 2025 10:55:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:57:56 +0000   Sat, 25 Oct 2025 10:55:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:57:56 +0000   Sat, 25 Oct 2025 10:56:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-348342
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                16712958-e8b7-42c4-971b-a9b56c3615de
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-sqrrf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m24s
	  kube-system                 etcd-embed-certs-348342                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-q5mzm                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-embed-certs-348342             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-embed-certs-348342    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-j9ngr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-embed-certs-348342             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ft6v5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-g46wr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m23s              kube-proxy       
	  Normal   Starting                 58s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m29s              kubelet          Node embed-certs-348342 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m29s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m29s              kubelet          Node embed-certs-348342 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m29s              kubelet          Node embed-certs-348342 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m29s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m25s              node-controller  Node embed-certs-348342 event: Registered Node embed-certs-348342 in Controller
	  Normal   NodeReady                103s               kubelet          Node embed-certs-348342 status is now: NodeReady
	  Normal   Starting                 66s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)  kubelet          Node embed-certs-348342 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)  kubelet          Node embed-certs-348342 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)  kubelet          Node embed-certs-348342 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-348342 event: Registered Node embed-certs-348342 in Controller
	
	
	==> dmesg <==
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[ +23.146409] overlayfs: idmapped layers are currently not supported
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
	[Oct25 10:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:55] overlayfs: idmapped layers are currently not supported
	[Oct25 10:56] overlayfs: idmapped layers are currently not supported
	[ +41.501413] overlayfs: idmapped layers are currently not supported
	[Oct25 10:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c70dd3ad27c72e73d7f22a0f8ce5472875ecc49420f54d9480a48af44851b43d] <==
	{"level":"warn","ts":"2025-10-25T10:57:02.766444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:02.799894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:02.833668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:02.864080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:02.913423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:02.939026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:02.979541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.003840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.029420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.062904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.105442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.130679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.156297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.210169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.228875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.253612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.286182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.322137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.385439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.409503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.454460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.483876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.517070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.582593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:57:03.649476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33818","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:58:06 up  2:40,  0 user,  load average: 3.73, 3.49, 2.96
	Linux embed-certs-348342 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c115eaa48c2ed4a4235288bce281b06608a49db9d4580641620a0c3eee76305] <==
	I1025 10:57:06.915550       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:57:06.915803       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:57:06.916010       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:57:06.916060       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:57:06.916099       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:57:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:57:07.115041       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:57:07.115109       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:57:07.115151       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:57:07.116049       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:57:37.115902       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:57:37.116126       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:57:37.116216       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:57:37.116294       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1025 10:57:38.615795       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:57:38.615915       1 metrics.go:72] Registering metrics
	I1025 10:57:38.616007       1 controller.go:711] "Syncing nftables rules"
	I1025 10:57:47.114565       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:57:47.114753       1 main.go:301] handling current node
	I1025 10:57:57.115180       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:57:57.115393       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8fcdfc5fc2dc75f67348b352c94dacbcef58121b8688bd5a6ea85732681228cd] <==
	I1025 10:57:05.300029       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:57:05.300622       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:57:05.300647       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:57:05.300655       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:57:05.300661       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:57:05.309825       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:57:05.309860       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:57:05.310132       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:57:05.324267       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:57:05.324350       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:57:05.324364       1 policy_source.go:240] refreshing policies
	I1025 10:57:05.332116       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:57:05.374221       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1025 10:57:05.415177       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:57:05.863750       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:57:06.127986       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:57:06.203429       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:57:06.266047       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:57:06.364517       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:57:06.460762       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:57:06.653836       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.45.12"}
	I1025 10:57:06.672077       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.35.130"}
	I1025 10:57:08.408060       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:57:08.857676       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:57:08.907543       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9e869b3a7afbb096c23279c50a357f29f02843cd43be8ae3176e4dc15d9e713d] <==
	I1025 10:57:08.301586       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:57:08.301642       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:57:08.302891       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:57:08.303171       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:57:08.304653       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:57:08.307973       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:57:08.308081       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:57:08.311115       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:57:08.318478       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:57:08.331901       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:57:08.332024       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:57:08.332053       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:57:08.332059       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:57:08.332065       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:57:08.339546       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:57:08.354218       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:57:08.354437       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:57:08.354520       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:57:08.354613       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-348342"
	I1025 10:57:08.354665       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:57:08.355288       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:57:08.356930       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:57:08.356969       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:57:08.356999       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:57:08.357073       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [e62944b5dc1016625be50b0fd9819e27fccc5caa6393d6057a1e8c1b42dd6493] <==
	I1025 10:57:06.904520       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:57:07.011971       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:57:07.115441       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:57:07.115546       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:57:07.115782       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:57:07.230562       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:57:07.230635       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:57:07.234819       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:57:07.235148       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:57:07.235223       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:57:07.236725       1 config.go:200] "Starting service config controller"
	I1025 10:57:07.236744       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:57:07.236775       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:57:07.236780       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:57:07.236793       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:57:07.236797       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:57:07.237421       1 config.go:309] "Starting node config controller"
	I1025 10:57:07.237439       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:57:07.237445       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:57:07.337029       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:57:07.337041       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:57:07.337083       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4a176e83f06702f09feac763002a74b8b8a030874adc921f8bddd98aa3c974d4] <==
	I1025 10:57:02.494431       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:57:05.416093       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:57:05.416130       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:57:05.449094       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:57:05.449241       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:57:05.449420       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:57:05.449220       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:57:05.449976       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:57:05.449261       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:57:05.449277       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:57:05.463214       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:57:05.552709       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 10:57:05.552857       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:57:05.564877       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:57:08 embed-certs-348342 kubelet[777]: I1025 10:57:08.563217     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdsct\" (UniqueName: \"kubernetes.io/projected/aac12e07-8479-4b85-840a-a58bb745ba59-kube-api-access-qdsct\") pod \"dashboard-metrics-scraper-6ffb444bf9-ft6v5\" (UID: \"aac12e07-8479-4b85-840a-a58bb745ba59\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5"
	Oct 25 10:57:08 embed-certs-348342 kubelet[777]: I1025 10:57:08.563278     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aac12e07-8479-4b85-840a-a58bb745ba59-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ft6v5\" (UID: \"aac12e07-8479-4b85-840a-a58bb745ba59\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5"
	Oct 25 10:57:08 embed-certs-348342 kubelet[777]: I1025 10:57:08.563307     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krfg9\" (UniqueName: \"kubernetes.io/projected/80e0bcd4-8c33-402a-ae1a-b8fbcd2183cf-kube-api-access-krfg9\") pod \"kubernetes-dashboard-855c9754f9-g46wr\" (UID: \"80e0bcd4-8c33-402a-ae1a-b8fbcd2183cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-g46wr"
	Oct 25 10:57:08 embed-certs-348342 kubelet[777]: I1025 10:57:08.563329     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/80e0bcd4-8c33-402a-ae1a-b8fbcd2183cf-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-g46wr\" (UID: \"80e0bcd4-8c33-402a-ae1a-b8fbcd2183cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-g46wr"
	Oct 25 10:57:08 embed-certs-348342 kubelet[777]: W1025 10:57:08.831448     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/crio-ab1a32f8e4eb5654189a592e5629b78838f9adb0db0cdc87553878a7eda79f69 WatchSource:0}: Error finding container ab1a32f8e4eb5654189a592e5629b78838f9adb0db0cdc87553878a7eda79f69: Status 404 returned error can't find the container with id ab1a32f8e4eb5654189a592e5629b78838f9adb0db0cdc87553878a7eda79f69
	Oct 25 10:57:08 embed-certs-348342 kubelet[777]: W1025 10:57:08.845212     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f2631e70db6711f4006897dadb529d53187261d470608d38fdc52fad80b234c4/crio-de41e8f026b0bd11c035134ba8711f7deb7cae6c63afc760b019eb03ad830294 WatchSource:0}: Error finding container de41e8f026b0bd11c035134ba8711f7deb7cae6c63afc760b019eb03ad830294: Status 404 returned error can't find the container with id de41e8f026b0bd11c035134ba8711f7deb7cae6c63afc760b019eb03ad830294
	Oct 25 10:57:19 embed-certs-348342 kubelet[777]: I1025 10:57:19.446743     777 scope.go:117] "RemoveContainer" containerID="83f7d9d0413ccb3c532f6a156a2435ad58d8debab872b7c2e64e60888ba22d28"
	Oct 25 10:57:19 embed-certs-348342 kubelet[777]: I1025 10:57:19.500887     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-g46wr" podStartSLOduration=6.429830338 podStartE2EDuration="11.494335822s" podCreationTimestamp="2025-10-25 10:57:08 +0000 UTC" firstStartedPulling="2025-10-25 10:57:08.835489038 +0000 UTC m=+9.736494016" lastFinishedPulling="2025-10-25 10:57:13.89999453 +0000 UTC m=+14.800999500" observedRunningTime="2025-10-25 10:57:14.452426039 +0000 UTC m=+15.353431017" watchObservedRunningTime="2025-10-25 10:57:19.494335822 +0000 UTC m=+20.395340809"
	Oct 25 10:57:20 embed-certs-348342 kubelet[777]: I1025 10:57:20.451861     777 scope.go:117] "RemoveContainer" containerID="83f7d9d0413ccb3c532f6a156a2435ad58d8debab872b7c2e64e60888ba22d28"
	Oct 25 10:57:20 embed-certs-348342 kubelet[777]: I1025 10:57:20.452438     777 scope.go:117] "RemoveContainer" containerID="bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285"
	Oct 25 10:57:20 embed-certs-348342 kubelet[777]: E1025 10:57:20.452938     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft6v5_kubernetes-dashboard(aac12e07-8479-4b85-840a-a58bb745ba59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5" podUID="aac12e07-8479-4b85-840a-a58bb745ba59"
	Oct 25 10:57:21 embed-certs-348342 kubelet[777]: I1025 10:57:21.454903     777 scope.go:117] "RemoveContainer" containerID="bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285"
	Oct 25 10:57:21 embed-certs-348342 kubelet[777]: E1025 10:57:21.455502     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft6v5_kubernetes-dashboard(aac12e07-8479-4b85-840a-a58bb745ba59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5" podUID="aac12e07-8479-4b85-840a-a58bb745ba59"
	Oct 25 10:57:27 embed-certs-348342 kubelet[777]: I1025 10:57:27.013304     777 scope.go:117] "RemoveContainer" containerID="bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285"
	Oct 25 10:57:27 embed-certs-348342 kubelet[777]: E1025 10:57:27.013514     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft6v5_kubernetes-dashboard(aac12e07-8479-4b85-840a-a58bb745ba59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5" podUID="aac12e07-8479-4b85-840a-a58bb745ba59"
	Oct 25 10:57:37 embed-certs-348342 kubelet[777]: I1025 10:57:37.496115     777 scope.go:117] "RemoveContainer" containerID="ec5a649a4f3eaa7fedb4b62e1ed03f701beb70acb1dad6d653e9d16c77f9c2c0"
	Oct 25 10:57:39 embed-certs-348342 kubelet[777]: I1025 10:57:39.259139     777 scope.go:117] "RemoveContainer" containerID="bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285"
	Oct 25 10:57:39 embed-certs-348342 kubelet[777]: I1025 10:57:39.505633     777 scope.go:117] "RemoveContainer" containerID="bcc96a9e34717d4cd7b184b40557fe5f346df9e222ac7954ff2ecefed222f285"
	Oct 25 10:57:39 embed-certs-348342 kubelet[777]: I1025 10:57:39.506324     777 scope.go:117] "RemoveContainer" containerID="5a71a6b9c4cc471507afdaeb55835fbb3036fed68bb602a6960c9402df693838"
	Oct 25 10:57:39 embed-certs-348342 kubelet[777]: E1025 10:57:39.506677     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft6v5_kubernetes-dashboard(aac12e07-8479-4b85-840a-a58bb745ba59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5" podUID="aac12e07-8479-4b85-840a-a58bb745ba59"
	Oct 25 10:57:47 embed-certs-348342 kubelet[777]: I1025 10:57:47.012792     777 scope.go:117] "RemoveContainer" containerID="5a71a6b9c4cc471507afdaeb55835fbb3036fed68bb602a6960c9402df693838"
	Oct 25 10:57:47 embed-certs-348342 kubelet[777]: E1025 10:57:47.012999     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ft6v5_kubernetes-dashboard(aac12e07-8479-4b85-840a-a58bb745ba59)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ft6v5" podUID="aac12e07-8479-4b85-840a-a58bb745ba59"
	Oct 25 10:57:58 embed-certs-348342 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:57:58 embed-certs-348342 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:57:58 embed-certs-348342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b665fbb37c2b93157ebbfdb2f5bf74ca890f415c87fe011f26d3fb206ab2b0a8] <==
	2025/10/25 10:57:13 Using namespace: kubernetes-dashboard
	2025/10/25 10:57:13 Using in-cluster config to connect to apiserver
	2025/10/25 10:57:13 Using secret token for csrf signing
	2025/10/25 10:57:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:57:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:57:13 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:57:13 Generating JWE encryption key
	2025/10/25 10:57:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:57:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:57:14 Initializing JWE encryption key from synchronized object
	2025/10/25 10:57:14 Creating in-cluster Sidecar client
	2025/10/25 10:57:14 Serving insecurely on HTTP port: 9090
	2025/10/25 10:57:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:57:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:57:13 Starting overwatch
	
	
	==> storage-provisioner [d5e3c73b3bb3432e2b9fbc1613b368968b855342702a5992c6d90219ffc7d2f4] <==
	I1025 10:57:37.628811       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:57:37.629184       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:57:37.635056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:41.090771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:45.353327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:48.954054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:52.011599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:55.035767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:55.046121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:57:55.046291       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:57:55.049038       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-348342_d4dbc423-da8e-4efe-9a6c-8f6a1604cd57!
	I1025 10:57:55.052764       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7ec49e55-bd27-4484-99d7-316a9176b2fc", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-348342_d4dbc423-da8e-4efe-9a6c-8f6a1604cd57 became leader
	W1025 10:57:55.060260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:55.067668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:57:55.149849       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-348342_d4dbc423-da8e-4efe-9a6c-8f6a1604cd57!
	W1025 10:57:57.071546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:57.077708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:59.081973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:57:59.090451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:01.096153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:01.106627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:03.115409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:03.128557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:05.132897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:05.144766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ec5a649a4f3eaa7fedb4b62e1ed03f701beb70acb1dad6d653e9d16c77f9c2c0] <==
	I1025 10:57:06.771237       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:57:36.773402       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
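
The kubelet entries above show dashboard-metrics-scraper-6ffb444bf9-ft6v5 cycling through CrashLoopBackOff while the first storage-provisioner container exited on an apiserver i/o timeout. As a sketch of the usual follow-up (pod name taken from the kubelet log above; this assumes the cluster is still running and unpaused), the crashed container's previous output could be pulled with:

	# Describe the crash-looping pod, then fetch the logs of its last failed run:
	kubectl --context embed-certs-348342 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-ft6v5
	kubectl --context embed-certs-348342 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-ft6v5 --previous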
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-348342 -n embed-certs-348342
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-348342 -n embed-certs-348342: exit status 2 (446.880256ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-348342 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.84s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.54s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-093313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-093313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (407.487671ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:58:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
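
The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check, which shells into the node and invokes runc directly; "open /run/runc: no such file or directory" means runc's state directory is absent inside the node. A minimal way to rerun the same check by hand, assuming the profile is still up (the crictl variant asks cri-o itself and sidesteps runc's state directory):

	# The exact command minikube ran inside the node, per the stderr above:
	minikube ssh -p no-preload-093313 "sudo runc list -f json"
	# Query container states through the CRI runtime instead:
	minikube ssh -p no-preload-093313 "sudo crictl ps -a"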
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-093313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-093313 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-093313 describe deploy/metrics-server -n kube-system: exit status 1 (129.27615ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-093313 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
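
The assertion at start_stop_delete_test.go:219 checks that the metrics-server Deployment references the remapped image. Had the addon applied, one way to read the rendered image directly (a plain-kubectl sketch, not the test's own helper):

	kubectl --context no-preload-093313 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# Expected when the --images/--registries overrides take effect:
	#   fake.domain/registry.k8s.io/echoserver:1.4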
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-093313
helpers_test.go:243: (dbg) docker inspect no-preload-093313:

-- stdout --
	[
	    {
	        "Id": "6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b",
	        "Created": "2025-10-25T10:57:28.426935477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 458662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:57:28.492866584Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/hostname",
	        "HostsPath": "/var/lib/docker/containers/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/hosts",
	        "LogPath": "/var/lib/docker/containers/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b-json.log",
	        "Name": "/no-preload-093313",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-093313:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-093313",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b",
	                "LowerDir": "/var/lib/docker/overlay2/1f5ea6e91c8f355c29623f9f36931296945f0bb9f9437babb5fc7356e43ab032-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f5ea6e91c8f355c29623f9f36931296945f0bb9f9437babb5fc7356e43ab032/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f5ea6e91c8f355c29623f9f36931296945f0bb9f9437babb5fc7356e43ab032/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f5ea6e91c8f355c29623f9f36931296945f0bb9f9437babb5fc7356e43ab032/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-093313",
	                "Source": "/var/lib/docker/volumes/no-preload-093313/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-093313",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-093313",
	                "name.minikube.sigs.k8s.io": "no-preload-093313",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "21dd61484fa924ade18bdf0568f22b567d50ae46458841e2a42f8030089bf69e",
	            "SandboxKey": "/var/run/docker/netns/21dd61484fa9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-093313": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:5e:a3:46:56:f2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2d822b8f1fe897a1280d2399b042700d5489e4df686ead1ec0a23045fa9c8398",
	                    "EndpointID": "90f2d8ab8c8818e1e3cc2ed722aa28aa47560fdb78b7e690a3661d1fbbd43fc4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-093313",
	                        "6e8e2d881e7d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
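
In the inspect output above, each container port is published on a dynamic loopback port (the API server's 8443/tcp landed on 127.0.0.1:33441). Either command below recovers that mapping without parsing the full JSON; the template form is the standard docker inspect Go-template idiom:

	docker port no-preload-093313 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-093313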
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-093313 -n no-preload-093313
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-093313 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-093313 logs -n 25: (1.555177571s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-031983                                                                                                                                                                                                                     │ old-k8s-version-031983       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:54 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p cert-expiration-736062 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ delete  │ -p cert-expiration-736062                                                                                                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-223394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-223394 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-223394 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-348342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │                     │
	│ stop    │ -p embed-certs-348342 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-348342 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:57 UTC │
	│ image   │ default-k8s-diff-port-223394 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p default-k8s-diff-port-223394 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p disable-driver-mounts-487220                                                                                                                                                                                                               │ disable-driver-mounts-487220 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ start   │ -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:58 UTC │
	│ image   │ embed-certs-348342 image list --format=json                                                                                                                                                                                                   │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p embed-certs-348342 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p embed-certs-348342                                                                                                                                                                                                                         │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ delete  │ -p embed-certs-348342                                                                                                                                                                                                                         │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-093313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:58:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:58:10.782425  462579 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:58:10.782606  462579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:58:10.782617  462579 out.go:374] Setting ErrFile to fd 2...
	I1025 10:58:10.782622  462579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:58:10.782906  462579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:58:10.783330  462579 out.go:368] Setting JSON to false
	I1025 10:58:10.784320  462579 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9642,"bootTime":1761380249,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:58:10.784384  462579 start.go:141] virtualization:  
	I1025 10:58:10.790187  462579 out.go:179] * [newest-cni-374679] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:58:10.797531  462579 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:58:10.797608  462579 notify.go:220] Checking for updates...
	I1025 10:58:10.804696  462579 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:58:10.807606  462579 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:58:10.810593  462579 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:58:10.813557  462579 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:58:10.816529  462579 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:58:10.819983  462579 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:58:10.820085  462579 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:58:10.862342  462579 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:58:10.862532  462579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:58:10.964426  462579 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:58:10.954410639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:58:10.964531  462579 docker.go:318] overlay module found
	I1025 10:58:10.967638  462579 out.go:179] * Using the docker driver based on user configuration
	I1025 10:58:10.971968  462579 start.go:305] selected driver: docker
	I1025 10:58:10.971990  462579 start.go:925] validating driver "docker" against <nil>
	I1025 10:58:10.972003  462579 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:58:10.972733  462579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:58:11.034078  462579 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:58:11.024060169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:58:11.034246  462579 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1025 10:58:11.034276  462579 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1025 10:58:11.034517  462579 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 10:58:11.037415  462579 out.go:179] * Using Docker driver with root privileges
	I1025 10:58:11.040249  462579 cni.go:84] Creating CNI manager for ""
	I1025 10:58:11.040322  462579 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:58:11.040338  462579 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:58:11.040426  462579 start.go:349] cluster config:
	{Name:newest-cni-374679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:58:11.043617  462579 out.go:179] * Starting "newest-cni-374679" primary control-plane node in "newest-cni-374679" cluster
	I1025 10:58:11.046424  462579 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:58:11.049354  462579 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:58:11.052782  462579 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:58:11.052900  462579 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:58:11.053261  462579 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:58:11.053273  462579 cache.go:58] Caching tarball of preloaded images
	I1025 10:58:11.053356  462579 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:58:11.053366  462579 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:58:11.053487  462579 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/config.json ...
	I1025 10:58:11.053505  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/config.json: {Name:mk06bee1cbe95c7bc000c8c241bf490be28f8c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:11.074565  462579 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:58:11.074589  462579 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:58:11.074603  462579 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:58:11.074641  462579 start.go:360] acquireMachinesLock for newest-cni-374679: {Name:mk7780b51c2c05e33336bc6c0b82ed21676e1544 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:11.074753  462579 start.go:364] duration metric: took 87.287µs to acquireMachinesLock for "newest-cni-374679"
	I1025 10:58:11.074823  462579 start.go:93] Provisioning new machine with config: &{Name:newest-cni-374679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:58:11.074900  462579 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:58:09.779161  458353 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:58:09.789684  458353 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:58:09.789720  458353 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:58:09.805146  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:58:10.383326  458353 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:58:10.383615  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:10.383677  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-093313 minikube.k8s.io/updated_at=2025_10_25T10_58_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=no-preload-093313 minikube.k8s.io/primary=true
	I1025 10:58:10.454743  458353 ops.go:34] apiserver oom_adj: -16
	I1025 10:58:10.673829  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:11.174151  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:11.674123  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:12.174870  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:12.673935  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:13.174146  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:13.673911  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:14.174881  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:14.349095  458353 kubeadm.go:1113] duration metric: took 3.965535203s to wait for elevateKubeSystemPrivileges
	I1025 10:58:14.349124  458353 kubeadm.go:402] duration metric: took 23.73586495s to StartCluster
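The elevateKubeSystemPrivileges step above is a plain retry loop: minikube re-runs `kubectl get sa default` roughly every 500ms until the controller-manager has created the default ServiceAccount, which is the signal that the RBAC binding can take effect. A minimal bash sketch of that loop (the binary and kubeconfig paths are taken from the log lines above; the loop shape and interval are assumptions, not minikube's actual Go code):

	KUBECTL=/var/lib/minikube/binaries/v1.34.1/kubectl
	KUBECONFIG_PATH=/var/lib/minikube/kubeconfig
	# Poll until the "default" ServiceAccount exists in the default namespace.
	until sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG_PATH" >/dev/null 2>&1; do
	  sleep 0.5  # the log shows attempts spaced about 500ms apart
	done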
	I1025 10:58:14.349141  458353 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:14.349203  458353 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:58:14.349881  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:14.350121  458353 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:58:14.350251  458353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:58:14.350510  458353 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:58:14.350548  458353 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:58:14.350607  458353 addons.go:69] Setting storage-provisioner=true in profile "no-preload-093313"
	I1025 10:58:14.350621  458353 addons.go:238] Setting addon storage-provisioner=true in "no-preload-093313"
	I1025 10:58:14.350643  458353 host.go:66] Checking if "no-preload-093313" exists ...
	I1025 10:58:14.351140  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:58:14.351645  458353 addons.go:69] Setting default-storageclass=true in profile "no-preload-093313"
	I1025 10:58:14.351667  458353 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-093313"
	I1025 10:58:14.351934  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:58:14.353608  458353 out.go:179] * Verifying Kubernetes components...
	I1025 10:58:14.360245  458353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:58:14.394551  458353 addons.go:238] Setting addon default-storageclass=true in "no-preload-093313"
	I1025 10:58:14.394591  458353 host.go:66] Checking if "no-preload-093313" exists ...
	I1025 10:58:14.395020  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:58:14.396324  458353 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:58:11.078392  462579 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:58:11.078654  462579 start.go:159] libmachine.API.Create for "newest-cni-374679" (driver="docker")
	I1025 10:58:11.078693  462579 client.go:168] LocalClient.Create starting
	I1025 10:58:11.078770  462579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem
	I1025 10:58:11.078815  462579 main.go:141] libmachine: Decoding PEM data...
	I1025 10:58:11.078832  462579 main.go:141] libmachine: Parsing certificate...
	I1025 10:58:11.078885  462579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem
	I1025 10:58:11.078911  462579 main.go:141] libmachine: Decoding PEM data...
	I1025 10:58:11.078930  462579 main.go:141] libmachine: Parsing certificate...
	I1025 10:58:11.079301  462579 cli_runner.go:164] Run: docker network inspect newest-cni-374679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:58:11.096284  462579 cli_runner.go:211] docker network inspect newest-cni-374679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:58:11.096377  462579 network_create.go:284] running [docker network inspect newest-cni-374679] to gather additional debugging logs...
	I1025 10:58:11.096400  462579 cli_runner.go:164] Run: docker network inspect newest-cni-374679
	W1025 10:58:11.120347  462579 cli_runner.go:211] docker network inspect newest-cni-374679 returned with exit code 1
	I1025 10:58:11.120391  462579 network_create.go:287] error running [docker network inspect newest-cni-374679]: docker network inspect newest-cni-374679: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-374679 not found
	I1025 10:58:11.120407  462579 network_create.go:289] output of [docker network inspect newest-cni-374679]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-374679 not found
	
	** /stderr **
	I1025 10:58:11.120504  462579 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:58:11.136990  462579 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2218a4d410c8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:a0:c3:54:c6:1f} reservation:<nil>}
	I1025 10:58:11.137518  462579 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-249eaf2d238d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:87:b9:4d:4c:0d} reservation:<nil>}
	I1025 10:58:11.137934  462579 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-210d4b236ff6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:d5:32:45:e6:85} reservation:<nil>}
	I1025 10:58:11.138642  462579 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019da0e0}
	I1025 10:58:11.138667  462579 network_create.go:124] attempt to create docker network newest-cni-374679 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 10:58:11.138732  462579 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-374679 newest-cni-374679
	I1025 10:58:11.217869  462579 network_create.go:108] docker network newest-cni-374679 192.168.76.0/24 created
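The three "skipping subnet" lines above show the subnet-selection strategy: minikube walks candidate private /24 blocks (192.168.49.0/24, then .58, .67, .76, ...), skips any that already back a host bridge interface, and creates the cluster network on the first free one. A rough shell equivalent, illustrative only (the real check in network.go also inspects host interfaces and records a reservation, and the create command above carries extra ip-masq/icc options):

	for third in 49 58 67 76 85 94; do
	  subnet="192.168.${third}.0/24"
	  # Skip the candidate if any existing docker network already owns it.
	  if ! docker network ls -q | xargs -r docker network inspect \
	      --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null | grep -qx "$subnet"; then
	    docker network create --driver=bridge --subnet="$subnet" \
	      --gateway="192.168.${third}.1" -o com.docker.network.driver.mtu=1500 newest-cni-374679
	    break
	  fi
	done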
	I1025 10:58:11.217906  462579 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-374679" container
	I1025 10:58:11.218089  462579 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:58:11.236221  462579 cli_runner.go:164] Run: docker volume create newest-cni-374679 --label name.minikube.sigs.k8s.io=newest-cni-374679 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:58:11.262082  462579 oci.go:103] Successfully created a docker volume newest-cni-374679
	I1025 10:58:11.262205  462579 cli_runner.go:164] Run: docker run --rm --name newest-cni-374679-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-374679 --entrypoint /usr/bin/test -v newest-cni-374679:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:58:11.833567  462579 oci.go:107] Successfully prepared a docker volume newest-cni-374679
	I1025 10:58:11.833628  462579 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:58:11.833647  462579 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:58:11.833717  462579 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-374679:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 10:58:14.399224  458353 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:58:14.399247  458353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:58:14.399311  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:58:14.429544  458353 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:58:14.429565  458353 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:58:14.429636  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:58:14.448016  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:58:14.472233  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
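Both ssh clients above dial 127.0.0.1 rather than the container's network IP: the kic driver publishes the container's 22/tcp to an ephemeral host port (33438 here), and the `docker container inspect -f` template shown earlier in the log is what resolves it. The same lookup done by hand (the ssh invocation is only an illustration; minikube connects with an in-process Go ssh client):

	# Resolve the host port mapped to the container's sshd, then connect on localhost.
	port=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-093313)
	ssh -p "$port" -i /home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa docker@127.0.0.1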
	I1025 10:58:14.837926  458353 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:58:14.838104  458353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:58:14.851705  458353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:58:14.939614  458353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:58:16.153416  458353 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.315225368s)
	I1025 10:58:16.153442  458353 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
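The sed pipeline that just completed splices two things into the Corefile held in the coredns ConfigMap: a `log` directive ahead of `errors`, and, ahead of the `forward . /etc/resolv.conf` line, a hosts stanza that maps host.minikube.internal to the node's gateway. Reconstructed from the sed expressions above, the inserted stanza reads:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}

The `fallthrough` keeps every other name flowing on to the later plugins, so only the one synthetic record is answered from the hosts block.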
	I1025 10:58:16.154538  458353 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.316502508s)
	I1025 10:58:16.159219  458353 node_ready.go:35] waiting up to 6m0s for node "no-preload-093313" to be "Ready" ...
	I1025 10:58:16.671320  458353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.819582644s)
	I1025 10:58:16.671420  458353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.731785964s)
	I1025 10:58:16.689031  458353 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-093313" context rescaled to 1 replicas
	I1025 10:58:16.702345  458353 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:58:16.707329  458353 addons.go:514] duration metric: took 2.35675501s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:58:17.917384  462579 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-374679:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.08361938s)
	I1025 10:58:17.917428  462579 kic.go:203] duration metric: took 6.083776609s to extract preloaded images to volume ...
	W1025 10:58:17.917554  462579 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:58:17.917669  462579 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:58:18.012935  462579 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-374679 --name newest-cni-374679 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-374679 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-374679 --network newest-cni-374679 --ip 192.168.76.2 --volume newest-cni-374679:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:58:18.424295  462579 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Running}}
	I1025 10:58:18.453769  462579 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:58:18.479989  462579 cli_runner.go:164] Run: docker exec newest-cni-374679 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:58:18.548187  462579 oci.go:144] the created container "newest-cni-374679" has a running status.
	I1025 10:58:18.548223  462579 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa...
	I1025 10:58:19.383457  462579 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:58:19.405875  462579 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:58:19.425688  462579 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:58:19.425708  462579 kic_runner.go:114] Args: [docker exec --privileged newest-cni-374679 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:58:19.479976  462579 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:58:19.520158  462579 machine.go:93] provisionDockerMachine start ...
	I1025 10:58:19.520251  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:19.549763  462579 main.go:141] libmachine: Using SSH client type: native
	I1025 10:58:19.550157  462579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1025 10:58:19.550169  462579 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:58:19.552534  462579 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57564->127.0.0.1:33443: read: connection reset by peer
	W1025 10:58:18.165538  458353 node_ready.go:57] node "no-preload-093313" has "Ready":"False" status (will retry)
	W1025 10:58:20.663455  458353 node_ready.go:57] node "no-preload-093313" has "Ready":"False" status (will retry)
	I1025 10:58:22.709700  462579 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-374679
	
	I1025 10:58:22.709725  462579 ubuntu.go:182] provisioning hostname "newest-cni-374679"
	I1025 10:58:22.709796  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:22.726203  462579 main.go:141] libmachine: Using SSH client type: native
	I1025 10:58:22.726540  462579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1025 10:58:22.726560  462579 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-374679 && echo "newest-cni-374679" | sudo tee /etc/hostname
	I1025 10:58:22.887491  462579 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-374679
	
	I1025 10:58:22.887586  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:22.906626  462579 main.go:141] libmachine: Using SSH client type: native
	I1025 10:58:22.906939  462579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1025 10:58:22.906965  462579 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-374679' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-374679/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-374679' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:58:23.058779  462579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:58:23.058810  462579 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:58:23.058835  462579 ubuntu.go:190] setting up certificates
	I1025 10:58:23.058846  462579 provision.go:84] configureAuth start
	I1025 10:58:23.058911  462579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-374679
	I1025 10:58:23.081396  462579 provision.go:143] copyHostCerts
	I1025 10:58:23.081474  462579 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:58:23.081484  462579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:58:23.081619  462579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:58:23.081779  462579 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:58:23.081787  462579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:58:23.081821  462579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:58:23.081884  462579 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:58:23.081889  462579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:58:23.081912  462579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:58:23.081972  462579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.newest-cni-374679 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-374679]
	I1025 10:58:24.019205  462579 provision.go:177] copyRemoteCerts
	I1025 10:58:24.019281  462579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:58:24.019331  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:24.037852  462579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:58:24.151428  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:58:24.172253  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:58:24.190664  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:58:24.210027  462579 provision.go:87] duration metric: took 1.151157349s to configureAuth
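configureAuth regenerates the host-side CA material and a per-machine server certificate whose SANs (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-374679) cover every name the node may be addressed by, then copies ca.pem, server.pem, and server-key.pem into /etc/docker on the node. minikube issues these certificates in Go; purely for illustration, an openssl sketch of an equivalent issuance (file names and validity period are assumptions):

	# Key + CSR for the machine, then sign with the minikube CA, adding the SANs from the log.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.newest-cni-374679"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:newest-cni-374679')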
	I1025 10:58:24.210055  462579 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:58:24.210251  462579 config.go:182] Loaded profile config "newest-cni-374679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:58:24.210374  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:24.227593  462579 main.go:141] libmachine: Using SSH client type: native
	I1025 10:58:24.227917  462579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1025 10:58:24.227939  462579 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:58:24.519205  462579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:58:24.519230  462579 machine.go:96] duration metric: took 4.999052802s to provisionDockerMachine
	I1025 10:58:24.519241  462579 client.go:171] duration metric: took 13.440536059s to LocalClient.Create
	I1025 10:58:24.519255  462579 start.go:167] duration metric: took 13.440604153s to libmachine.API.Create "newest-cni-374679"
	I1025 10:58:24.519263  462579 start.go:293] postStartSetup for "newest-cni-374679" (driver="docker")
	I1025 10:58:24.519273  462579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:58:24.519341  462579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:58:24.519388  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:24.544709  462579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:58:24.650408  462579 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:58:24.655742  462579 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:58:24.655781  462579 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:58:24.655793  462579 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:58:24.655899  462579 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:58:24.656031  462579 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:58:24.656145  462579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:58:24.665932  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:58:24.685599  462579 start.go:296] duration metric: took 166.321096ms for postStartSetup
	I1025 10:58:24.686160  462579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-374679
	I1025 10:58:24.703228  462579 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/config.json ...
	I1025 10:58:24.703521  462579 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:58:24.703570  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:24.721678  462579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:58:24.823202  462579 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:58:24.827928  462579 start.go:128] duration metric: took 13.753011448s to createHost
	I1025 10:58:24.827955  462579 start.go:83] releasing machines lock for "newest-cni-374679", held for 13.753151625s
	I1025 10:58:24.828057  462579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-374679
	I1025 10:58:24.844247  462579 ssh_runner.go:195] Run: cat /version.json
	I1025 10:58:24.844306  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:24.844567  462579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:58:24.844632  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:24.862934  462579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:58:24.879396  462579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:58:24.974041  462579 ssh_runner.go:195] Run: systemctl --version
	I1025 10:58:25.077623  462579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:58:25.117623  462579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:58:25.122317  462579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:58:25.122444  462579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:58:25.152689  462579 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 10:58:25.152728  462579 start.go:495] detecting cgroup driver to use...
	I1025 10:58:25.152765  462579 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:58:25.152826  462579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:58:25.172681  462579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:58:25.186727  462579 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:58:25.186803  462579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:58:25.205457  462579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:58:25.225572  462579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:58:25.351681  462579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:58:25.486668  462579 docker.go:234] disabling docker service ...
	I1025 10:58:25.486739  462579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:58:25.512544  462579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:58:25.529315  462579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:58:25.663344  462579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:58:25.787647  462579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:58:25.801143  462579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:58:25.816415  462579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:58:25.816486  462579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:58:25.826095  462579 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:58:25.826169  462579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:58:25.835772  462579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:58:25.846318  462579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:58:25.855407  462579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:58:25.863923  462579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:58:25.873128  462579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:58:25.888440  462579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:58:25.897870  462579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:58:25.906296  462579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:58:25.914338  462579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:58:26.040855  462579 ssh_runner.go:195] Run: sudo systemctl restart crio
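The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place. Reassembled from those expressions, the keys they leave behind amount to the following (a sketch of the end state only; the real drop-in keeps whatever other keys and table headers it already carried):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart immediately above are what make the rewritten drop-in take effect.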
	I1025 10:58:26.166487  462579 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:58:26.166560  462579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:58:26.170685  462579 start.go:563] Will wait 60s for crictl version
	I1025 10:58:26.170877  462579 ssh_runner.go:195] Run: which crictl
	I1025 10:58:26.174971  462579 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:58:26.200539  462579 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:58:26.200635  462579 ssh_runner.go:195] Run: crio --version
	I1025 10:58:26.229645  462579 ssh_runner.go:195] Run: crio --version
	I1025 10:58:26.263789  462579 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:58:26.266758  462579 cli_runner.go:164] Run: docker network inspect newest-cni-374679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:58:26.282561  462579 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:58:26.286421  462579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:58:26.299427  462579 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1025 10:58:23.162887  458353 node_ready.go:57] node "no-preload-093313" has "Ready":"False" status (will retry)
	W1025 10:58:25.664785  458353 node_ready.go:57] node "no-preload-093313" has "Ready":"False" status (will retry)
	I1025 10:58:26.302256  462579 kubeadm.go:883] updating cluster {Name:newest-cni-374679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:58:26.302400  462579 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:58:26.302482  462579 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:58:26.346684  462579 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:58:26.346711  462579 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:58:26.346768  462579 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:58:26.373796  462579 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:58:26.373820  462579 cache_images.go:85] Images are preloaded, skipping loading
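The preload check above runs "sudo crictl images --output json" and concludes that every required image is already present, so extraction and loading are skipped. A hedged sketch of such a check; the images/repoTags field names follow crictl's JSON output, everything else is illustrative:

package main

import (
	"encoding/json"
	"fmt"
)

// criImages models the subset of `crictl images --output json` we need.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allPreloaded reports whether every expected tag is already present.
func allPreloaded(raw []byte, expected []string) (bool, error) {
	var out criImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range expected {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"]}]}`)
	ok, _ := allPreloaded(raw, []string{"registry.k8s.io/kube-apiserver:v1.34.1"})
	fmt.Println(ok) // true
}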
	I1025 10:58:26.373828  462579 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:58:26.373925  462579 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-374679 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
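In the kubelet drop-in above, the empty ExecStart= line is deliberate: systemd drop-ins append to list-valued settings, so the inherited command must be cleared before the override is set. A sketch of rendering such a drop-in with Go's text/template (the struct fields are illustrative; only the values come from the log):

package main

import (
	"os"
	"text/template"
)

// dropIn mirrors the 10-kubeadm.conf override written above; the bare
// ExecStart= resets the list before the real command is declared.
const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubeVersion}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, struct{ Runtime, KubeVersion, Node, IP string }{
		"crio", "v1.34.1", "newest-cni-374679", "192.168.76.2",
	})
}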
	I1025 10:58:26.374039  462579 ssh_runner.go:195] Run: crio config
	I1025 10:58:26.425799  462579 cni.go:84] Creating CNI manager for ""
	I1025 10:58:26.425823  462579 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:58:26.425844  462579 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 10:58:26.425871  462579 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-374679 NodeName:newest-cni-374679 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:58:26.426079  462579 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-374679"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:58:26.426159  462579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:58:26.433782  462579 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:58:26.433876  462579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:58:26.441241  462579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:58:26.453959  462579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:58:26.467788  462579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1025 10:58:26.486131  462579 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:58:26.489560  462579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:58:26.499774  462579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:58:26.628120  462579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:58:26.646609  462579 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679 for IP: 192.168.76.2
	I1025 10:58:26.646628  462579 certs.go:195] generating shared ca certs ...
	I1025 10:58:26.646644  462579 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:26.646797  462579 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:58:26.646848  462579 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:58:26.646861  462579 certs.go:257] generating profile certs ...
	I1025 10:58:26.646915  462579 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.key
	I1025 10:58:26.646932  462579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.crt with IP's: []
	I1025 10:58:27.975991  462579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.crt ...
	I1025 10:58:27.976029  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.crt: {Name:mk33f9548b2e8e050334262e4e13576b670afc14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:27.976235  462579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.key ...
	I1025 10:58:27.976249  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.key: {Name:mk1237d6927ddef67436b0ac9efba3211b433c17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:27.976350  462579 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key.de28dca6
	I1025 10:58:27.976367  462579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt.de28dca6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 10:58:28.631503  462579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt.de28dca6 ...
	I1025 10:58:28.631536  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt.de28dca6: {Name:mk18b35fb2c66fb75733fa3fccef46e3d42071f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:28.631730  462579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key.de28dca6 ...
	I1025 10:58:28.631745  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key.de28dca6: {Name:mk8e642326cbbab18dd4eeab2907fcc966b9062e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:28.631833  462579 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt.de28dca6 -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt
	I1025 10:58:28.631917  462579 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key.de28dca6 -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key
	I1025 10:58:28.631983  462579 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.key
	I1025 10:58:28.632001  462579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.crt with IP's: []
	I1025 10:58:30.131642  462579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.crt ...
	I1025 10:58:30.131680  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.crt: {Name:mk95084b72f2abc40fa7e538044505840687a45f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:30.131908  462579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.key ...
	I1025 10:58:30.131923  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.key: {Name:mk78a365eceeac82cee17c21ec1560ea43b277f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
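The profile certificates generated above are leaf certs signed by the shared minikubeCA, with the service, loopback, and node IPs baked in as SANs. A self-contained crypto/x509 sketch of that pattern, reusing the SAN list from the log (the throwaway CA, key sizes, and lifetimes are illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A throwaway CA standing in for minikubeCA (illustrative only).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert carrying the apiserver SANs seen in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}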
	I1025 10:58:30.132139  462579 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:58:30.132184  462579 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:58:30.132197  462579 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:58:30.132224  462579 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:58:30.132250  462579 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:58:30.132306  462579 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:58:30.132359  462579 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:58:30.133068  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:58:30.154948  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:58:30.179465  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:58:30.200201  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:58:30.220850  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:58:30.241805  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:58:30.263856  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:58:30.284447  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:58:30.303171  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:58:30.322948  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:58:30.341835  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:58:30.367717  462579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:58:30.384756  462579 ssh_runner.go:195] Run: openssl version
	I1025 10:58:30.398708  462579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:58:30.435481  462579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:58:30.443778  462579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:58:30.443864  462579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:58:30.498785  462579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:58:30.508375  462579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:58:30.517389  462579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:58:30.521155  462579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:58:30.521239  462579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:58:30.562817  462579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:58:30.579656  462579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:58:30.588182  462579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:58:30.592003  462579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:58:30.592082  462579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:58:30.633302  462579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
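The openssl/ln sequence above installs each CA into the system trust store under OpenSSL's hashed-name convention: "openssl x509 -hash -noout" prints the subject hash, and a <hash>.0 symlink in /etc/ssl/certs is what OpenSSL's lookup expects. A sketch that shells out the same way the logged commands do (paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA symlinks pemPath into certsDir under OpenSSL's "<subject-hash>.0" name.
func linkCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace any stale link, mirroring ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}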
	I1025 10:58:30.644308  462579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:58:30.650275  462579 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:58:30.650343  462579 kubeadm.go:400] StartCluster: {Name:newest-cni-374679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:58:30.650433  462579 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:58:30.650493  462579 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:58:30.716801  462579 cri.go:89] found id: ""
	I1025 10:58:30.716917  462579 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:58:30.739257  462579 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:58:30.747646  462579 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:58:30.747717  462579 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:58:30.756236  462579 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:58:30.756254  462579 kubeadm.go:157] found existing configuration files:
	
	I1025 10:58:30.756306  462579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:58:30.766706  462579 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:58:30.766768  462579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:58:30.776732  462579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:58:30.791129  462579 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:58:30.791202  462579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:58:30.799420  462579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:58:30.810058  462579 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:58:30.810126  462579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:58:30.821501  462579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:58:30.835608  462579 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:58:30.835674  462579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:58:30.846611  462579 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:58:30.916189  462579 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:58:30.916582  462579 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:58:30.957151  462579 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:58:30.957323  462579 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:58:30.957405  462579 kubeadm.go:318] OS: Linux
	I1025 10:58:30.957479  462579 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:58:30.957564  462579 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:58:30.957646  462579 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:58:30.957728  462579 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:58:30.957817  462579 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:58:30.957928  462579 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:58:30.958030  462579 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:58:30.958135  462579 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:58:30.958229  462579 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:58:31.052897  462579 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:58:31.053087  462579 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:58:31.053221  462579 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:58:31.063147  462579 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1025 10:58:28.162660  458353 node_ready.go:57] node "no-preload-093313" has "Ready":"False" status (will retry)
	W1025 10:58:30.162818  458353 node_ready.go:57] node "no-preload-093313" has "Ready":"False" status (will retry)
	I1025 10:58:30.677420  458353 node_ready.go:49] node "no-preload-093313" is "Ready"
	I1025 10:58:30.677451  458353 node_ready.go:38] duration metric: took 14.5181592s for node "no-preload-093313" to be "Ready" ...
	I1025 10:58:30.677465  458353 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:58:30.677529  458353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:58:30.693880  458353 api_server.go:72] duration metric: took 16.343726661s to wait for apiserver process to appear ...
	I1025 10:58:30.693903  458353 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:58:30.693923  458353 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:58:30.716726  458353 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 10:58:30.719040  458353 api_server.go:141] control plane version: v1.34.1
	I1025 10:58:30.719070  458353 api_server.go:131] duration metric: took 25.158864ms to wait for apiserver health ...
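The apiserver health wait above simply polls the /healthz endpoint until it returns 200. A minimal polling sketch; certificate verification is skipped here only to keep the example short, and a production check should trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200 OK or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: real code should verify against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.85.2:8443/healthz", 30*time.Second))
}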
	I1025 10:58:30.719080  458353 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:58:30.725248  458353 system_pods.go:59] 8 kube-system pods found
	I1025 10:58:30.725291  458353 system_pods.go:61] "coredns-66bc5c9577-c56mp" [ee976d20-a036-4d38-ad57-a502bf3d0ff7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:58:30.725298  458353 system_pods.go:61] "etcd-no-preload-093313" [83fac023-8769-42f1-bb01-7b45b695a20f] Running
	I1025 10:58:30.725305  458353 system_pods.go:61] "kindnet-6tbtt" [9b74e355-e50d-43f8-94b8-43fdbad27e8d] Running
	I1025 10:58:30.725309  458353 system_pods.go:61] "kube-apiserver-no-preload-093313" [5b7a2f41-bfcc-4460-bc30-242d59d2cfa4] Running
	I1025 10:58:30.725315  458353 system_pods.go:61] "kube-controller-manager-no-preload-093313" [890c70e0-54a4-4423-bbd0-245fbbae3273] Running
	I1025 10:58:30.725319  458353 system_pods.go:61] "kube-proxy-vlb79" [9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc] Running
	I1025 10:58:30.725324  458353 system_pods.go:61] "kube-scheduler-no-preload-093313" [6376071f-6220-481e-b3ec-fed60fe4f008] Running
	I1025 10:58:30.725330  458353 system_pods.go:61] "storage-provisioner" [335dab10-1baa-4bca-afa1-0ccae3bddad5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:58:30.725342  458353 system_pods.go:74] duration metric: took 6.255711ms to wait for pod list to return data ...
	I1025 10:58:30.725355  458353 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:58:30.728233  458353 default_sa.go:45] found service account: "default"
	I1025 10:58:30.728258  458353 default_sa.go:55] duration metric: took 2.896001ms for default service account to be created ...
	I1025 10:58:30.728268  458353 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:58:30.731868  458353 system_pods.go:86] 8 kube-system pods found
	I1025 10:58:30.731907  458353 system_pods.go:89] "coredns-66bc5c9577-c56mp" [ee976d20-a036-4d38-ad57-a502bf3d0ff7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:58:30.731913  458353 system_pods.go:89] "etcd-no-preload-093313" [83fac023-8769-42f1-bb01-7b45b695a20f] Running
	I1025 10:58:30.731921  458353 system_pods.go:89] "kindnet-6tbtt" [9b74e355-e50d-43f8-94b8-43fdbad27e8d] Running
	I1025 10:58:30.731926  458353 system_pods.go:89] "kube-apiserver-no-preload-093313" [5b7a2f41-bfcc-4460-bc30-242d59d2cfa4] Running
	I1025 10:58:30.731930  458353 system_pods.go:89] "kube-controller-manager-no-preload-093313" [890c70e0-54a4-4423-bbd0-245fbbae3273] Running
	I1025 10:58:30.731935  458353 system_pods.go:89] "kube-proxy-vlb79" [9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc] Running
	I1025 10:58:30.731940  458353 system_pods.go:89] "kube-scheduler-no-preload-093313" [6376071f-6220-481e-b3ec-fed60fe4f008] Running
	I1025 10:58:30.731945  458353 system_pods.go:89] "storage-provisioner" [335dab10-1baa-4bca-afa1-0ccae3bddad5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:58:30.731961  458353 retry.go:31] will retry after 221.305441ms: missing components: kube-dns
	I1025 10:58:30.962307  458353 system_pods.go:86] 8 kube-system pods found
	I1025 10:58:30.962344  458353 system_pods.go:89] "coredns-66bc5c9577-c56mp" [ee976d20-a036-4d38-ad57-a502bf3d0ff7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:58:30.962351  458353 system_pods.go:89] "etcd-no-preload-093313" [83fac023-8769-42f1-bb01-7b45b695a20f] Running
	I1025 10:58:30.962357  458353 system_pods.go:89] "kindnet-6tbtt" [9b74e355-e50d-43f8-94b8-43fdbad27e8d] Running
	I1025 10:58:30.962371  458353 system_pods.go:89] "kube-apiserver-no-preload-093313" [5b7a2f41-bfcc-4460-bc30-242d59d2cfa4] Running
	I1025 10:58:30.962376  458353 system_pods.go:89] "kube-controller-manager-no-preload-093313" [890c70e0-54a4-4423-bbd0-245fbbae3273] Running
	I1025 10:58:30.962380  458353 system_pods.go:89] "kube-proxy-vlb79" [9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc] Running
	I1025 10:58:30.962384  458353 system_pods.go:89] "kube-scheduler-no-preload-093313" [6376071f-6220-481e-b3ec-fed60fe4f008] Running
	I1025 10:58:30.962390  458353 system_pods.go:89] "storage-provisioner" [335dab10-1baa-4bca-afa1-0ccae3bddad5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:58:30.962403  458353 retry.go:31] will retry after 358.312048ms: missing components: kube-dns
	I1025 10:58:31.324534  458353 system_pods.go:86] 8 kube-system pods found
	I1025 10:58:31.324573  458353 system_pods.go:89] "coredns-66bc5c9577-c56mp" [ee976d20-a036-4d38-ad57-a502bf3d0ff7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:58:31.324580  458353 system_pods.go:89] "etcd-no-preload-093313" [83fac023-8769-42f1-bb01-7b45b695a20f] Running
	I1025 10:58:31.324586  458353 system_pods.go:89] "kindnet-6tbtt" [9b74e355-e50d-43f8-94b8-43fdbad27e8d] Running
	I1025 10:58:31.324590  458353 system_pods.go:89] "kube-apiserver-no-preload-093313" [5b7a2f41-bfcc-4460-bc30-242d59d2cfa4] Running
	I1025 10:58:31.324595  458353 system_pods.go:89] "kube-controller-manager-no-preload-093313" [890c70e0-54a4-4423-bbd0-245fbbae3273] Running
	I1025 10:58:31.324600  458353 system_pods.go:89] "kube-proxy-vlb79" [9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc] Running
	I1025 10:58:31.324603  458353 system_pods.go:89] "kube-scheduler-no-preload-093313" [6376071f-6220-481e-b3ec-fed60fe4f008] Running
	I1025 10:58:31.324610  458353 system_pods.go:89] "storage-provisioner" [335dab10-1baa-4bca-afa1-0ccae3bddad5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:58:31.324626  458353 retry.go:31] will retry after 433.655988ms: missing components: kube-dns
	I1025 10:58:31.770412  458353 system_pods.go:86] 8 kube-system pods found
	I1025 10:58:31.770443  458353 system_pods.go:89] "coredns-66bc5c9577-c56mp" [ee976d20-a036-4d38-ad57-a502bf3d0ff7] Running
	I1025 10:58:31.770451  458353 system_pods.go:89] "etcd-no-preload-093313" [83fac023-8769-42f1-bb01-7b45b695a20f] Running
	I1025 10:58:31.770455  458353 system_pods.go:89] "kindnet-6tbtt" [9b74e355-e50d-43f8-94b8-43fdbad27e8d] Running
	I1025 10:58:31.770459  458353 system_pods.go:89] "kube-apiserver-no-preload-093313" [5b7a2f41-bfcc-4460-bc30-242d59d2cfa4] Running
	I1025 10:58:31.770465  458353 system_pods.go:89] "kube-controller-manager-no-preload-093313" [890c70e0-54a4-4423-bbd0-245fbbae3273] Running
	I1025 10:58:31.770469  458353 system_pods.go:89] "kube-proxy-vlb79" [9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc] Running
	I1025 10:58:31.770474  458353 system_pods.go:89] "kube-scheduler-no-preload-093313" [6376071f-6220-481e-b3ec-fed60fe4f008] Running
	I1025 10:58:31.770479  458353 system_pods.go:89] "storage-provisioner" [335dab10-1baa-4bca-afa1-0ccae3bddad5] Running
	I1025 10:58:31.770487  458353 system_pods.go:126] duration metric: took 1.042212864s to wait for k8s-apps to be running ...
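The three "will retry after ..." lines above come from a jittered, growing backoff wrapped around the pod-list check, which keeps retrying until kube-dns leaves Pending. A generic sketch of that loop (the initial delay, growth factor, and jitter are illustrative):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil runs check until it succeeds or the deadline passes, sleeping a
// jittered, growing delay between attempts, like the retry.go lines above.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow ~1.5x per attempt
	}
}

func main() {
	attempts := 0
	_ = retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
}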
	I1025 10:58:31.770494  458353 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:58:31.770551  458353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:58:31.795164  458353 system_svc.go:56] duration metric: took 24.658823ms WaitForService to wait for kubelet
	I1025 10:58:31.795189  458353 kubeadm.go:586] duration metric: took 17.445041882s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:58:31.795208  458353 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:58:31.798443  458353 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:58:31.798512  458353 node_conditions.go:123] node cpu capacity is 2
	I1025 10:58:31.798541  458353 node_conditions.go:105] duration metric: took 3.326282ms to run NodePressure ...
	I1025 10:58:31.798568  458353 start.go:241] waiting for startup goroutines ...
	I1025 10:58:31.798600  458353 start.go:246] waiting for cluster config update ...
	I1025 10:58:31.798630  458353 start.go:255] writing updated cluster config ...
	I1025 10:58:31.798944  458353 ssh_runner.go:195] Run: rm -f paused
	I1025 10:58:31.803140  458353 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:58:31.808054  458353 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c56mp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:31.817065  458353 pod_ready.go:94] pod "coredns-66bc5c9577-c56mp" is "Ready"
	I1025 10:58:31.817131  458353 pod_ready.go:86] duration metric: took 9.053792ms for pod "coredns-66bc5c9577-c56mp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:31.820182  458353 pod_ready.go:83] waiting for pod "etcd-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:31.825970  458353 pod_ready.go:94] pod "etcd-no-preload-093313" is "Ready"
	I1025 10:58:31.826057  458353 pod_ready.go:86] duration metric: took 5.800585ms for pod "etcd-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:31.828967  458353 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:31.834607  458353 pod_ready.go:94] pod "kube-apiserver-no-preload-093313" is "Ready"
	I1025 10:58:31.834678  458353 pod_ready.go:86] duration metric: took 5.644695ms for pod "kube-apiserver-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:31.837396  458353 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:32.207392  458353 pod_ready.go:94] pod "kube-controller-manager-no-preload-093313" is "Ready"
	I1025 10:58:32.207421  458353 pod_ready.go:86] duration metric: took 369.960008ms for pod "kube-controller-manager-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:32.408362  458353 pod_ready.go:83] waiting for pod "kube-proxy-vlb79" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:32.807204  458353 pod_ready.go:94] pod "kube-proxy-vlb79" is "Ready"
	I1025 10:58:32.807235  458353 pod_ready.go:86] duration metric: took 398.838365ms for pod "kube-proxy-vlb79" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:33.008399  458353 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:33.408942  458353 pod_ready.go:94] pod "kube-scheduler-no-preload-093313" is "Ready"
	I1025 10:58:33.408969  458353 pod_ready.go:86] duration metric: took 400.540288ms for pod "kube-scheduler-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:33.408983  458353 pod_ready.go:40] duration metric: took 1.605815378s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:58:33.490261  458353 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:58:33.493448  458353 out.go:179] * Done! kubectl is now configured to use "no-preload-093313" cluster and "default" namespace by default
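The pod_ready waits above check each control-plane pod, selected by its component label, for a Ready=True condition before declaring the cluster done. A client-go sketch of the same check; the kubeconfig path in main is a placeholder, and the helper itself is illustrative rather than minikube's own:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether every pod matching selector has Ready=True.
func podsReady(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podsReady(cs, "kube-system", "k8s-app=kube-dns")
	fmt.Println(ok, err)
}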
	I1025 10:58:31.066615  462579 out.go:252]   - Generating certificates and keys ...
	I1025 10:58:31.066767  462579 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:58:31.066848  462579 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:58:31.149130  462579 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:58:31.903265  462579 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:58:31.968055  462579 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:58:32.444010  462579 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:58:32.588572  462579 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:58:32.588743  462579 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-374679] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:58:32.782535  462579 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:58:32.782824  462579 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-374679] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:58:33.800991  462579 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:58:34.404644  462579 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:58:34.550327  462579 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:58:34.550899  462579 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:58:34.848915  462579 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:58:35.207722  462579 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:58:35.610639  462579 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:58:35.755903  462579 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:58:36.529889  462579 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:58:36.530612  462579 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:58:36.533215  462579 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:58:36.537093  462579 out.go:252]   - Booting up control plane ...
	I1025 10:58:36.537203  462579 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:58:36.537291  462579 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:58:36.538959  462579 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:58:36.554369  462579 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:58:36.554760  462579 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:58:36.563330  462579 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:58:36.563653  462579 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:58:36.563879  462579 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:58:36.721832  462579 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:58:36.721973  462579 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:58:37.724417  462579 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002349577s
	I1025 10:58:37.729020  462579 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:58:37.729401  462579 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 10:58:37.729511  462579 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:58:37.729594  462579 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Oct 25 10:58:30 no-preload-093313 crio[838]: time="2025-10-25T10:58:30.865611862Z" level=info msg="Created container b6e57f0cd9701ce0633e072b608e12c46cac88497194c3b10d097b38a429b570: kube-system/coredns-66bc5c9577-c56mp/coredns" id=14f4cb7a-8c6d-4239-b796-62f1e7662c89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:58:30 no-preload-093313 crio[838]: time="2025-10-25T10:58:30.866596568Z" level=info msg="Starting container: b6e57f0cd9701ce0633e072b608e12c46cac88497194c3b10d097b38a429b570" id=3e3c634e-5cab-47dd-8cc3-8aa005987bf0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:58:30 no-preload-093313 crio[838]: time="2025-10-25T10:58:30.868366717Z" level=info msg="Started container" PID=2483 containerID=b6e57f0cd9701ce0633e072b608e12c46cac88497194c3b10d097b38a429b570 description=kube-system/coredns-66bc5c9577-c56mp/coredns id=3e3c634e-5cab-47dd-8cc3-8aa005987bf0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c074a8e3297a3460f898dbc153e61b6b5a808f2aa9cdc13d4ff16946bfa1b89
	Oct 25 10:58:34 no-preload-093313 crio[838]: time="2025-10-25T10:58:34.103091704Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3acab515-6654-4e1f-bdc6-323fc5c0aa68 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:58:34 no-preload-093313 crio[838]: time="2025-10-25T10:58:34.103165419Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:58:34 no-preload-093313 crio[838]: time="2025-10-25T10:58:34.114393369Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:741f89876095f95bb3a9c1f32a756a628587ae541533aa340c4a41d3b4f4737c UID:75418b38-6328-42b9-b710-7cee6dc929c2 NetNS:/var/run/netns/b2b7e34b-2ac8-4dec-bd95-71007b961fc5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000799d8}] Aliases:map[]}"
	Oct 25 10:58:34 no-preload-093313 crio[838]: time="2025-10-25T10:58:34.114445783Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 10:58:34 no-preload-093313 crio[838]: time="2025-10-25T10:58:34.13771499Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:741f89876095f95bb3a9c1f32a756a628587ae541533aa340c4a41d3b4f4737c UID:75418b38-6328-42b9-b710-7cee6dc929c2 NetNS:/var/run/netns/b2b7e34b-2ac8-4dec-bd95-71007b961fc5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000799d8}] Aliases:map[]}"
	Oct 25 10:58:34 no-preload-093313 crio[838]: time="2025-10-25T10:58:34.137956666Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 10:58:34 no-preload-093313 crio[838]: time="2025-10-25T10:58:34.147659129Z" level=info msg="Ran pod sandbox 741f89876095f95bb3a9c1f32a756a628587ae541533aa340c4a41d3b4f4737c with infra container: default/busybox/POD" id=3acab515-6654-4e1f-bdc6-323fc5c0aa68 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:58:34 no-preload-093313 crio[838]: time="2025-10-25T10:58:34.148887529Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cd5c69ed-d639-4570-89a5-65c7e420fd5f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:58:34 no-preload-093313 crio[838]: time="2025-10-25T10:58:34.149136902Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cd5c69ed-d639-4570-89a5-65c7e420fd5f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:58:34 no-preload-093313 crio[838]: time="2025-10-25T10:58:34.149185165Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=cd5c69ed-d639-4570-89a5-65c7e420fd5f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:58:34 no-preload-093313 crio[838]: time="2025-10-25T10:58:34.154984257Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9e5a2791-6383-4c54-93d6-4b8a9bdf05bf name=/runtime.v1.ImageService/PullImage
	Oct 25 10:58:34 no-preload-093313 crio[838]: time="2025-10-25T10:58:34.15827625Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 10:58:36 no-preload-093313 crio[838]: time="2025-10-25T10:58:36.17273958Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9e5a2791-6383-4c54-93d6-4b8a9bdf05bf name=/runtime.v1.ImageService/PullImage
	Oct 25 10:58:36 no-preload-093313 crio[838]: time="2025-10-25T10:58:36.174545241Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e9f3fab6-096d-4cde-9553-5fd8013f0f15 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:58:36 no-preload-093313 crio[838]: time="2025-10-25T10:58:36.178503373Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=76ffe885-6fcf-419c-b6c8-d77c90e7d7d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:58:36 no-preload-093313 crio[838]: time="2025-10-25T10:58:36.187001607Z" level=info msg="Creating container: default/busybox/busybox" id=b3b29b84-2cd6-41fe-8cf4-b3eb3acfe7ea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:58:36 no-preload-093313 crio[838]: time="2025-10-25T10:58:36.187379252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:58:36 no-preload-093313 crio[838]: time="2025-10-25T10:58:36.194072514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:58:36 no-preload-093313 crio[838]: time="2025-10-25T10:58:36.194697135Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:58:36 no-preload-093313 crio[838]: time="2025-10-25T10:58:36.219728893Z" level=info msg="Created container d5aa1468163135e3e3ee3ac85731223a72fcfe1ed57d45ef5a9f7b9a4846e54d: default/busybox/busybox" id=b3b29b84-2cd6-41fe-8cf4-b3eb3acfe7ea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:58:36 no-preload-093313 crio[838]: time="2025-10-25T10:58:36.223083647Z" level=info msg="Starting container: d5aa1468163135e3e3ee3ac85731223a72fcfe1ed57d45ef5a9f7b9a4846e54d" id=cfe5e037-97f0-494f-8b82-3c6276a57bec name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:58:36 no-preload-093313 crio[838]: time="2025-10-25T10:58:36.22807648Z" level=info msg="Started container" PID=2538 containerID=d5aa1468163135e3e3ee3ac85731223a72fcfe1ed57d45ef5a9f7b9a4846e54d description=default/busybox/busybox id=cfe5e037-97f0-494f-8b82-3c6276a57bec name=/runtime.v1.RuntimeService/StartContainer sandboxID=741f89876095f95bb3a9c1f32a756a628587ae541533aa340c4a41d3b4f4737c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d5aa146816313       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   741f89876095f       busybox                                     default
	b6e57f0cd9701       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   5c074a8e3297a       coredns-66bc5c9577-c56mp                    kube-system
	79aa9c08469e0       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   bb2d418885ffb       storage-provisioner                         kube-system
	c0b7bd294217e       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   bd6c8a36a4557       kindnet-6tbtt                               kube-system
	23f47475f8ce2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   d13c74423f574       kube-proxy-vlb79                            kube-system
	db46c295e42c5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      44 seconds ago      Running             etcd                      0                   fa976969e4d65       etcd-no-preload-093313                      kube-system
	0a6d604d1e96d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   c88454a1384f3       kube-apiserver-no-preload-093313            kube-system
	43dee31e8cf66       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   8076c3afb8097       kube-controller-manager-no-preload-093313   kube-system
	4025a23310af7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   abc0fb7a7848e       kube-scheduler-no-preload-093313            kube-system
	
	
	==> coredns [b6e57f0cd9701ce0633e072b608e12c46cac88497194c3b10d097b38a429b570] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48719 - 9117 "HINFO IN 4982782504642112398.8948278699806638029. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035256842s
	
	
	==> describe nodes <==
	Name:               no-preload-093313
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-093313
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=no-preload-093313
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_58_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:58:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-093313
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:58:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:58:40 +0000   Sat, 25 Oct 2025 10:58:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:58:40 +0000   Sat, 25 Oct 2025 10:58:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:58:40 +0000   Sat, 25 Oct 2025 10:58:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:58:40 +0000   Sat, 25 Oct 2025 10:58:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-093313
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                03f9066b-feaa-4e69-be40-1b2314524518
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-c56mp                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-093313                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-6tbtt                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-093313             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-093313    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-vlb79                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-093313             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Warning  CgroupV1                 45s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node no-preload-093313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node no-preload-093313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node no-preload-093313 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-093313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-093313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-093313 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-093313 event: Registered Node no-preload-093313 in Controller
	  Normal   NodeReady                14s                kubelet          Node no-preload-093313 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[ +23.146409] overlayfs: idmapped layers are currently not supported
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
	[Oct25 10:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:55] overlayfs: idmapped layers are currently not supported
	[Oct25 10:56] overlayfs: idmapped layers are currently not supported
	[ +41.501413] overlayfs: idmapped layers are currently not supported
	[Oct25 10:57] overlayfs: idmapped layers are currently not supported
	[Oct25 10:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [db46c295e42c58db02fe1c2d0afe72696e539f01fe0c5cc9b644cae51f9db18e] <==
	{"level":"warn","ts":"2025-10-25T10:58:04.444912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.464754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.492984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.512041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.530718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.578979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.593539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.624707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.646921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.683845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.733942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.754713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.770688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.790913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.823932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.845841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.890184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.909471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.956694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:04.975656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:05.023763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:05.039912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:05.059350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:05.075274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:05.171651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47520","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:58:44 up  2:41,  0 user,  load average: 2.97, 3.33, 2.93
	Linux no-preload-093313 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c0b7bd294217e0524a0241b1f58e727060561fa9d7153f9561dd4ffbcc43f38c] <==
	I1025 10:58:19.715095       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:58:19.715368       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:58:19.715555       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:58:19.715577       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:58:19.715589       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:58:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:58:19.917194       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:58:19.917229       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:58:19.917239       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:58:19.917338       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:58:20.219038       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:58:20.219155       1 metrics.go:72] Registering metrics
	I1025 10:58:20.219285       1 controller.go:711] "Syncing nftables rules"
	I1025 10:58:29.922057       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:58:29.922110       1 main.go:301] handling current node
	I1025 10:58:39.918911       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:58:39.919040       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0a6d604d1e96de41f72ca8dca0adad65b8a1cfcda754cfa9eac9c6bbfcd1f60f] <==
	I1025 10:58:06.316881       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:58:06.329358       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:58:06.331467       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:58:06.358527       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:58:06.358710       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:58:06.482835       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:58:07.015374       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:58:07.027858       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:58:07.027943       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:58:08.093748       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:58:08.150683       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:58:08.208883       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:58:08.226384       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:58:08.236251       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 10:58:08.238466       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:58:08.244364       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:58:09.180874       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:58:09.200687       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:58:09.227729       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:58:14.017340       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1025 10:58:14.017399       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1025 10:58:14.137756       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:58:14.225850       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:58:14.233890       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1025 10:58:41.996502       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:33698: use of closed network connection
	
	
	==> kube-controller-manager [43dee31e8cf66b8b973182957aade0cc8f7f3599c13789fd5bd9003d0b22c7dc] <==
	I1025 10:58:13.273220       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:58:13.273973       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:58:13.307579       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:58:13.332623       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:58:13.332709       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:58:13.332780       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:58:13.332854       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:58:13.332925       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:58:13.332994       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:58:13.343645       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:58:13.344059       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:58:13.344385       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:58:13.344488       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:58:13.344547       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:58:13.361575       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:58:13.365614       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:58:13.365662       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:58:13.365675       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:58:13.406722       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:58:13.423655       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:58:13.431236       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:58:13.454943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:58:13.455132       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:58:13.455164       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:58:33.193248       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [23f47475f8ce28365e549361f6765223a122504e2e034e696051ec3389a5d8a5] <==
	I1025 10:58:15.381178       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:58:15.595382       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:58:15.703335       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:58:15.703380       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:58:15.703446       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:58:15.828552       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:58:15.828606       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:58:15.841587       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:58:15.841922       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:58:15.841938       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:58:15.843991       1 config.go:200] "Starting service config controller"
	I1025 10:58:15.844004       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:58:15.844021       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:58:15.844025       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:58:15.844047       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:58:15.844051       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:58:15.856539       1 config.go:309] "Starting node config controller"
	I1025 10:58:15.856554       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:58:15.856561       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:58:15.945093       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:58:15.945134       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:58:15.945216       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4025a23310af75f93b4d4af059c4ebd98f6981c834a8c4582800b409447811c3] <==
	E1025 10:58:06.301203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:58:06.301265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:58:06.301315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:58:06.309904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:58:06.309969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:58:06.310081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:58:06.310124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:58:06.310163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:58:06.310252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:58:06.310330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:58:06.311767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:58:06.311830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:58:06.311867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:58:06.311908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:58:07.198283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:58:07.275928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:58:07.291958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:58:07.322262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:58:07.350318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:58:07.359252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:58:07.431476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:58:07.512996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:58:07.603814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:58:07.631785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1025 10:58:10.160346       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:58:14 no-preload-093313 kubelet[2002]: I1025 10:58:14.176705    2002 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b74e355-e50d-43f8-94b8-43fdbad27e8d-cni-cfg\") pod \"kindnet-6tbtt\" (UID: \"9b74e355-e50d-43f8-94b8-43fdbad27e8d\") " pod="kube-system/kindnet-6tbtt"
	Oct 25 10:58:14 no-preload-093313 kubelet[2002]: I1025 10:58:14.176721    2002 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b74e355-e50d-43f8-94b8-43fdbad27e8d-xtables-lock\") pod \"kindnet-6tbtt\" (UID: \"9b74e355-e50d-43f8-94b8-43fdbad27e8d\") " pod="kube-system/kindnet-6tbtt"
	Oct 25 10:58:14 no-preload-093313 kubelet[2002]: E1025 10:58:14.295647    2002 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 25 10:58:14 no-preload-093313 kubelet[2002]: E1025 10:58:14.295700    2002 projected.go:196] Error preparing data for projected volume kube-api-access-qzwz8 for pod kube-system/kube-proxy-vlb79: configmap "kube-root-ca.crt" not found
	Oct 25 10:58:14 no-preload-093313 kubelet[2002]: E1025 10:58:14.295801    2002 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc-kube-api-access-qzwz8 podName:9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc nodeName:}" failed. No retries permitted until 2025-10-25 10:58:14.795775693 +0000 UTC m=+5.694783260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qzwz8" (UniqueName: "kubernetes.io/projected/9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc-kube-api-access-qzwz8") pod "kube-proxy-vlb79" (UID: "9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc") : configmap "kube-root-ca.crt" not found
	Oct 25 10:58:14 no-preload-093313 kubelet[2002]: E1025 10:58:14.298072    2002 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 25 10:58:14 no-preload-093313 kubelet[2002]: E1025 10:58:14.298108    2002 projected.go:196] Error preparing data for projected volume kube-api-access-x2jv8 for pod kube-system/kindnet-6tbtt: configmap "kube-root-ca.crt" not found
	Oct 25 10:58:14 no-preload-093313 kubelet[2002]: E1025 10:58:14.298177    2002 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b74e355-e50d-43f8-94b8-43fdbad27e8d-kube-api-access-x2jv8 podName:9b74e355-e50d-43f8-94b8-43fdbad27e8d nodeName:}" failed. No retries permitted until 2025-10-25 10:58:14.798157368 +0000 UTC m=+5.697164935 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x2jv8" (UniqueName: "kubernetes.io/projected/9b74e355-e50d-43f8-94b8-43fdbad27e8d-kube-api-access-x2jv8") pod "kindnet-6tbtt" (UID: "9b74e355-e50d-43f8-94b8-43fdbad27e8d") : configmap "kube-root-ca.crt" not found
	Oct 25 10:58:14 no-preload-093313 kubelet[2002]: I1025 10:58:14.893274    2002 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:58:14 no-preload-093313 kubelet[2002]: W1025 10:58:14.983983    2002 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/crio-d13c74423f5741bbb565c3c62b70acea60639e822a700dc291e44107c3148c23 WatchSource:0}: Error finding container d13c74423f5741bbb565c3c62b70acea60639e822a700dc291e44107c3148c23: Status 404 returned error can't find the container with id d13c74423f5741bbb565c3c62b70acea60639e822a700dc291e44107c3148c23
	Oct 25 10:58:15 no-preload-093313 kubelet[2002]: W1025 10:58:15.159057    2002 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/crio-bd6c8a36a45574fb12bd43fdf68fec7e386adec1465c6c5be775cec2d0840d91 WatchSource:0}: Error finding container bd6c8a36a45574fb12bd43fdf68fec7e386adec1465c6c5be775cec2d0840d91: Status 404 returned error can't find the container with id bd6c8a36a45574fb12bd43fdf68fec7e386adec1465c6c5be775cec2d0840d91
	Oct 25 10:58:19 no-preload-093313 kubelet[2002]: I1025 10:58:19.522523    2002 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vlb79" podStartSLOduration=5.522501461 podStartE2EDuration="5.522501461s" podCreationTimestamp="2025-10-25 10:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:58:15.781199749 +0000 UTC m=+6.680207324" watchObservedRunningTime="2025-10-25 10:58:19.522501461 +0000 UTC m=+10.421509028"
	Oct 25 10:58:30 no-preload-093313 kubelet[2002]: I1025 10:58:30.349916    2002 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:58:30 no-preload-093313 kubelet[2002]: I1025 10:58:30.392701    2002 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6tbtt" podStartSLOduration=12.027379363 podStartE2EDuration="16.39268405s" podCreationTimestamp="2025-10-25 10:58:14 +0000 UTC" firstStartedPulling="2025-10-25 10:58:15.169048648 +0000 UTC m=+6.068056214" lastFinishedPulling="2025-10-25 10:58:19.534353334 +0000 UTC m=+10.433360901" observedRunningTime="2025-10-25 10:58:19.662652238 +0000 UTC m=+10.561659821" watchObservedRunningTime="2025-10-25 10:58:30.39268405 +0000 UTC m=+21.291691617"
	Oct 25 10:58:30 no-preload-093313 kubelet[2002]: I1025 10:58:30.539374    2002 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6q7z\" (UniqueName: \"kubernetes.io/projected/ee976d20-a036-4d38-ad57-a502bf3d0ff7-kube-api-access-x6q7z\") pod \"coredns-66bc5c9577-c56mp\" (UID: \"ee976d20-a036-4d38-ad57-a502bf3d0ff7\") " pod="kube-system/coredns-66bc5c9577-c56mp"
	Oct 25 10:58:30 no-preload-093313 kubelet[2002]: I1025 10:58:30.539466    2002 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/335dab10-1baa-4bca-afa1-0ccae3bddad5-tmp\") pod \"storage-provisioner\" (UID: \"335dab10-1baa-4bca-afa1-0ccae3bddad5\") " pod="kube-system/storage-provisioner"
	Oct 25 10:58:30 no-preload-093313 kubelet[2002]: I1025 10:58:30.539509    2002 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee976d20-a036-4d38-ad57-a502bf3d0ff7-config-volume\") pod \"coredns-66bc5c9577-c56mp\" (UID: \"ee976d20-a036-4d38-ad57-a502bf3d0ff7\") " pod="kube-system/coredns-66bc5c9577-c56mp"
	Oct 25 10:58:30 no-preload-093313 kubelet[2002]: I1025 10:58:30.539528    2002 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs989\" (UniqueName: \"kubernetes.io/projected/335dab10-1baa-4bca-afa1-0ccae3bddad5-kube-api-access-rs989\") pod \"storage-provisioner\" (UID: \"335dab10-1baa-4bca-afa1-0ccae3bddad5\") " pod="kube-system/storage-provisioner"
	Oct 25 10:58:30 no-preload-093313 kubelet[2002]: W1025 10:58:30.710344    2002 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/crio-bb2d418885ffb7b7e61bd5c6d611d76fe4455a8bafe556642d8a81ae1e23c0b3 WatchSource:0}: Error finding container bb2d418885ffb7b7e61bd5c6d611d76fe4455a8bafe556642d8a81ae1e23c0b3: Status 404 returned error can't find the container with id bb2d418885ffb7b7e61bd5c6d611d76fe4455a8bafe556642d8a81ae1e23c0b3
	Oct 25 10:58:30 no-preload-093313 kubelet[2002]: W1025 10:58:30.787026    2002 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/crio-5c074a8e3297a3460f898dbc153e61b6b5a808f2aa9cdc13d4ff16946bfa1b89 WatchSource:0}: Error finding container 5c074a8e3297a3460f898dbc153e61b6b5a808f2aa9cdc13d4ff16946bfa1b89: Status 404 returned error can't find the container with id 5c074a8e3297a3460f898dbc153e61b6b5a808f2aa9cdc13d4ff16946bfa1b89
	Oct 25 10:58:31 no-preload-093313 kubelet[2002]: I1025 10:58:31.731025    2002 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-c56mp" podStartSLOduration=17.731003334 podStartE2EDuration="17.731003334s" podCreationTimestamp="2025-10-25 10:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:58:31.707235957 +0000 UTC m=+22.606243540" watchObservedRunningTime="2025-10-25 10:58:31.731003334 +0000 UTC m=+22.630010901"
	Oct 25 10:58:33 no-preload-093313 kubelet[2002]: I1025 10:58:33.793194    2002 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.793173221 podStartE2EDuration="17.793173221s" podCreationTimestamp="2025-10-25 10:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:58:31.773173306 +0000 UTC m=+22.672180897" watchObservedRunningTime="2025-10-25 10:58:33.793173221 +0000 UTC m=+24.692180796"
	Oct 25 10:58:33 no-preload-093313 kubelet[2002]: I1025 10:58:33.865778    2002 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v99b9\" (UniqueName: \"kubernetes.io/projected/75418b38-6328-42b9-b710-7cee6dc929c2-kube-api-access-v99b9\") pod \"busybox\" (UID: \"75418b38-6328-42b9-b710-7cee6dc929c2\") " pod="default/busybox"
	Oct 25 10:58:34 no-preload-093313 kubelet[2002]: W1025 10:58:34.145486    2002 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/crio-741f89876095f95bb3a9c1f32a756a628587ae541533aa340c4a41d3b4f4737c WatchSource:0}: Error finding container 741f89876095f95bb3a9c1f32a756a628587ae541533aa340c4a41d3b4f4737c: Status 404 returned error can't find the container with id 741f89876095f95bb3a9c1f32a756a628587ae541533aa340c4a41d3b4f4737c
	Oct 25 10:58:36 no-preload-093313 kubelet[2002]: I1025 10:58:36.712035    2002 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.6839464830000002 podStartE2EDuration="3.712018165s" podCreationTimestamp="2025-10-25 10:58:33 +0000 UTC" firstStartedPulling="2025-10-25 10:58:34.149448421 +0000 UTC m=+25.048455987" lastFinishedPulling="2025-10-25 10:58:36.177520094 +0000 UTC m=+27.076527669" observedRunningTime="2025-10-25 10:58:36.711435883 +0000 UTC m=+27.610443458" watchObservedRunningTime="2025-10-25 10:58:36.712018165 +0000 UTC m=+27.611025740"
	
	
	==> storage-provisioner [79aa9c08469e0d63c433f7f00cec0b0ab943508f58b51bd799e23b984e887b9a] <==
	I1025 10:58:30.838281       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:58:30.883249       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:58:30.883394       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:58:30.898415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:30.968678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:58:30.968945       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:58:30.969409       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77448026-dc1c-4f90-a3be-98f6a3fbe47d", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-093313_2fba4b03-23bf-4e2e-953c-e857550b1073 became leader
	I1025 10:58:30.971519       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-093313_2fba4b03-23bf-4e2e-953c-e857550b1073!
	W1025 10:58:30.980557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:30.989140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:58:31.072659       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-093313_2fba4b03-23bf-4e2e-953c-e857550b1073!
	W1025 10:58:32.992980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:32.999208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:35.004132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:35.011280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:37.015583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:37.024673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:39.027909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:39.038311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:41.042320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:41.049533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:43.057469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:58:43.074404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-093313 -n no-preload-093313
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-093313 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.54s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-374679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-374679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (286.549605ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:58:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-374679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
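A hand-run version of the failing "check paused" probe is sketched below (hypothetical commands; they assume the newest-cni-374679 node container is still up and that `minikube ssh` works for this profile). The first line mirrors the `sudo runc list -f json` call that exited with status 1 above; the second looks for the state directory the error names:

	out/minikube-linux-arm64 -p newest-cni-374679 ssh -- sudo runc list -f json
	out/minikube-linux-arm64 -p newest-cni-374679 ssh -- ls /run/runc /run/crun

The `open /run/runc: no such file or directory` error means runc's state directory was never created on the node. One plausible, unverified explanation: these clusters run CRI-O, and recent CRI-O releases default to the crun runtime, so container state would live under /run/crun and `runc list` finds nothing. Note also that /run is a tmpfs mount inside the kic container (see the "Tmpfs" entry in the docker inspect output below), so nothing under it survives a node restart.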
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-374679
helpers_test.go:243: (dbg) docker inspect newest-cni-374679:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d",
	        "Created": "2025-10-25T10:58:18.030527549Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 463307,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:58:18.135497028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/hostname",
	        "HostsPath": "/var/lib/docker/containers/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/hosts",
	        "LogPath": "/var/lib/docker/containers/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d-json.log",
	        "Name": "/newest-cni-374679",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-374679:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-374679",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d",
	                "LowerDir": "/var/lib/docker/overlay2/1178d1621d712233b729c8b344a40f947ea6d8f0eb289643c9931db7c67e1eb5-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1178d1621d712233b729c8b344a40f947ea6d8f0eb289643c9931db7c67e1eb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1178d1621d712233b729c8b344a40f947ea6d8f0eb289643c9931db7c67e1eb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1178d1621d712233b729c8b344a40f947ea6d8f0eb289643c9931db7c67e1eb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-374679",
	                "Source": "/var/lib/docker/volumes/newest-cni-374679/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-374679",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-374679",
	                "name.minikube.sigs.k8s.io": "newest-cni-374679",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0df0dd1aba0b7f3010ca6e60813b3da830ca49c9ceefb4aac56ad124f1cf45b4",
	            "SandboxKey": "/var/run/docker/netns/0df0dd1aba0b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-374679": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:ee:bd:53:2c:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58611ffe5362d6a9d68586194cffae78efe127e4ab53288bcccd59ddf919e4bd",
	                    "EndpointID": "0fa1805a4b9a01398fa93c02848e6827664548505163eb9eb58a157ed5f5a351",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-374679",
	                        "132f6b53f321"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
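The inspect output above is also how the harness reaches the node over SSH: the "22/tcp" entry under NetworkSettings.Ports maps to host port 33443 on 127.0.0.1, and the cli_runner lines further down read it back with a Go template. A minimal Go sketch of that lookup under the same assumptions (the helper name hostPortFor is ours, not minikube's; the template string is the one the log shows being passed to docker):

	// hostport.go: read a container's published host port via
	// `docker container inspect -f`, using the template seen in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPortFor is a hypothetical helper; minikube wraps the same call
	// in its cli_runner package.
	func hostPortFor(container, containerPort string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Against the inspect output above, this should print 33443.
		port, err := hostPortFor("newest-cni-374679", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println(port)
	}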
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-374679 -n newest-cni-374679
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-374679 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-374679 logs -n 25: (1.155926638s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-736062 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:54 UTC │ 25 Oct 25 10:55 UTC │
	│ delete  │ -p cert-expiration-736062                                                                                                                                                                                                                     │ cert-expiration-736062       │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:55 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-223394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-223394 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:55 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-223394 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-348342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │                     │
	│ stop    │ -p embed-certs-348342 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-348342 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:57 UTC │
	│ image   │ default-k8s-diff-port-223394 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p default-k8s-diff-port-223394 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p disable-driver-mounts-487220                                                                                                                                                                                                               │ disable-driver-mounts-487220 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ start   │ -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:58 UTC │
	│ image   │ embed-certs-348342 image list --format=json                                                                                                                                                                                                   │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p embed-certs-348342 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p embed-certs-348342                                                                                                                                                                                                                         │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ delete  │ -p embed-certs-348342                                                                                                                                                                                                                         │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-093313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	│ stop    │ -p no-preload-093313 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-374679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:58:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:58:10.782425  462579 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:58:10.782606  462579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:58:10.782617  462579 out.go:374] Setting ErrFile to fd 2...
	I1025 10:58:10.782622  462579 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:58:10.782906  462579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:58:10.783330  462579 out.go:368] Setting JSON to false
	I1025 10:58:10.784320  462579 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9642,"bootTime":1761380249,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:58:10.784384  462579 start.go:141] virtualization:  
	I1025 10:58:10.790187  462579 out.go:179] * [newest-cni-374679] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:58:10.797531  462579 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:58:10.797608  462579 notify.go:220] Checking for updates...
	I1025 10:58:10.804696  462579 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:58:10.807606  462579 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:58:10.810593  462579 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:58:10.813557  462579 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:58:10.816529  462579 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:58:10.819983  462579 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:58:10.820085  462579 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:58:10.862342  462579 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:58:10.862532  462579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:58:10.964426  462579 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:58:10.954410639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:58:10.964531  462579 docker.go:318] overlay module found
	I1025 10:58:10.967638  462579 out.go:179] * Using the docker driver based on user configuration
	I1025 10:58:10.971968  462579 start.go:305] selected driver: docker
	I1025 10:58:10.971990  462579 start.go:925] validating driver "docker" against <nil>
	I1025 10:58:10.972003  462579 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:58:10.972733  462579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:58:11.034078  462579 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:58:11.024060169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:58:11.034246  462579 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1025 10:58:11.034276  462579 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1025 10:58:11.034517  462579 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 10:58:11.037415  462579 out.go:179] * Using Docker driver with root privileges
	I1025 10:58:11.040249  462579 cni.go:84] Creating CNI manager for ""
	I1025 10:58:11.040322  462579 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:58:11.040338  462579 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:58:11.040426  462579 start.go:349] cluster config:
	{Name:newest-cni-374679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:58:11.043617  462579 out.go:179] * Starting "newest-cni-374679" primary control-plane node in "newest-cni-374679" cluster
	I1025 10:58:11.046424  462579 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:58:11.049354  462579 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:58:11.052782  462579 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:58:11.052900  462579 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:58:11.053261  462579 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:58:11.053273  462579 cache.go:58] Caching tarball of preloaded images
	I1025 10:58:11.053356  462579 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:58:11.053366  462579 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:58:11.053487  462579 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/config.json ...
	I1025 10:58:11.053505  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/config.json: {Name:mk06bee1cbe95c7bc000c8c241bf490be28f8c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:11.074565  462579 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:58:11.074589  462579 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:58:11.074603  462579 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:58:11.074641  462579 start.go:360] acquireMachinesLock for newest-cni-374679: {Name:mk7780b51c2c05e33336bc6c0b82ed21676e1544 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:11.074753  462579 start.go:364] duration metric: took 87.287µs to acquireMachinesLock for "newest-cni-374679"
	I1025 10:58:11.074823  462579 start.go:93] Provisioning new machine with config: &{Name:newest-cni-374679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:58:11.074900  462579 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:58:09.779161  458353 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:58:09.789684  458353 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:58:09.789720  458353 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:58:09.805146  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:58:10.383326  458353 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:58:10.383615  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:10.383677  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-093313 minikube.k8s.io/updated_at=2025_10_25T10_58_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=no-preload-093313 minikube.k8s.io/primary=true
	I1025 10:58:10.454743  458353 ops.go:34] apiserver oom_adj: -16
	I1025 10:58:10.673829  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:11.174151  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:11.674123  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:12.174870  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:12.673935  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:13.174146  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:13.673911  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:14.174881  458353 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:14.349095  458353 kubeadm.go:1113] duration metric: took 3.965535203s to wait for elevateKubeSystemPrivileges
	I1025 10:58:14.349124  458353 kubeadm.go:402] duration metric: took 23.73586495s to StartCluster
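The repeated `kubectl get sa default` runs above are a readiness poll: kubeadm creates the "default" ServiceAccount asynchronously, so the start path retries at roughly 500ms intervals until the command exits zero, then records the elapsed time as elevateKubeSystemPrivileges. A minimal sketch of such a loop (the function name and the overall timeout are our assumptions; the binary and kubeconfig paths are the ones in the log):

	// waitsa.go: poll for the default ServiceAccount the way the log shows,
	// retrying `kubectl get sa default` until it succeeds or time runs out.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // ServiceAccount exists; RBAC bootstrap can proceed
			}
			time.Sleep(500 * time.Millisecond) // interval inferred from the timestamps above
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		err := waitForDefaultSA(
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"/var/lib/minikube/kubeconfig",
			2*time.Minute, // assumed budget; not taken from the log
		)
		fmt.Println(err)
	}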
	I1025 10:58:14.349141  458353 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:14.349203  458353 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:58:14.349881  458353 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:14.350121  458353 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:58:14.350251  458353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:58:14.350510  458353 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:58:14.350548  458353 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:58:14.350607  458353 addons.go:69] Setting storage-provisioner=true in profile "no-preload-093313"
	I1025 10:58:14.350621  458353 addons.go:238] Setting addon storage-provisioner=true in "no-preload-093313"
	I1025 10:58:14.350643  458353 host.go:66] Checking if "no-preload-093313" exists ...
	I1025 10:58:14.351140  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:58:14.351645  458353 addons.go:69] Setting default-storageclass=true in profile "no-preload-093313"
	I1025 10:58:14.351667  458353 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-093313"
	I1025 10:58:14.351934  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:58:14.353608  458353 out.go:179] * Verifying Kubernetes components...
	I1025 10:58:14.360245  458353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:58:14.394551  458353 addons.go:238] Setting addon default-storageclass=true in "no-preload-093313"
	I1025 10:58:14.394591  458353 host.go:66] Checking if "no-preload-093313" exists ...
	I1025 10:58:14.395020  458353 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:58:14.396324  458353 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:58:11.078392  462579 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:58:11.078654  462579 start.go:159] libmachine.API.Create for "newest-cni-374679" (driver="docker")
	I1025 10:58:11.078693  462579 client.go:168] LocalClient.Create starting
	I1025 10:58:11.078770  462579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem
	I1025 10:58:11.078815  462579 main.go:141] libmachine: Decoding PEM data...
	I1025 10:58:11.078832  462579 main.go:141] libmachine: Parsing certificate...
	I1025 10:58:11.078885  462579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem
	I1025 10:58:11.078911  462579 main.go:141] libmachine: Decoding PEM data...
	I1025 10:58:11.078930  462579 main.go:141] libmachine: Parsing certificate...
	I1025 10:58:11.079301  462579 cli_runner.go:164] Run: docker network inspect newest-cni-374679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:58:11.096284  462579 cli_runner.go:211] docker network inspect newest-cni-374679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:58:11.096377  462579 network_create.go:284] running [docker network inspect newest-cni-374679] to gather additional debugging logs...
	I1025 10:58:11.096400  462579 cli_runner.go:164] Run: docker network inspect newest-cni-374679
	W1025 10:58:11.120347  462579 cli_runner.go:211] docker network inspect newest-cni-374679 returned with exit code 1
	I1025 10:58:11.120391  462579 network_create.go:287] error running [docker network inspect newest-cni-374679]: docker network inspect newest-cni-374679: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-374679 not found
	I1025 10:58:11.120407  462579 network_create.go:289] output of [docker network inspect newest-cni-374679]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-374679 not found
	
	** /stderr **
	I1025 10:58:11.120504  462579 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:58:11.136990  462579 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2218a4d410c8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:a0:c3:54:c6:1f} reservation:<nil>}
	I1025 10:58:11.137518  462579 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-249eaf2d238d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:87:b9:4d:4c:0d} reservation:<nil>}
	I1025 10:58:11.137934  462579 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-210d4b236ff6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:d5:32:45:e6:85} reservation:<nil>}
	I1025 10:58:11.138642  462579 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019da0e0}
	I1025 10:58:11.138667  462579 network_create.go:124] attempt to create docker network newest-cni-374679 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 10:58:11.138732  462579 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-374679 newest-cni-374679
	I1025 10:58:11.217869  462579 network_create.go:108] docker network newest-cni-374679 192.168.76.0/24 created
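The "skipping subnet" lines above show the selection rule at work: candidate 192.168.x.0/24 blocks are probed with the third octet stepping by 9 (49, 58, 67, 76, ...) and the first block not claimed by an existing bridge is used for the new network. A toy reproduction of that walk, with the occupied subnets hard-coded from this run (the real code discovers them from the host's interfaces, and the step size is inferred from the logged sequence):

	// subnet.go: pick the first free 192.168.x.0/24 block, mirroring the
	// skip/use decisions in the network.go log lines above.
	package main

	import "fmt"

	func firstFreeSubnet(taken map[string]bool) string {
		for third := 49; third < 255; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return "" // no private /24 left in this range
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, // br-2218a4d410c8
			"192.168.58.0/24": true, // br-249eaf2d238d
			"192.168.67.0/24": true, // br-210d4b236ff6
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24, as logged
	}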
	I1025 10:58:11.217906  462579 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-374679" container
	I1025 10:58:11.218089  462579 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:58:11.236221  462579 cli_runner.go:164] Run: docker volume create newest-cni-374679 --label name.minikube.sigs.k8s.io=newest-cni-374679 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:58:11.262082  462579 oci.go:103] Successfully created a docker volume newest-cni-374679
	I1025 10:58:11.262205  462579 cli_runner.go:164] Run: docker run --rm --name newest-cni-374679-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-374679 --entrypoint /usr/bin/test -v newest-cni-374679:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:58:11.833567  462579 oci.go:107] Successfully prepared a docker volume newest-cni-374679
	I1025 10:58:11.833628  462579 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:58:11.833647  462579 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:58:11.833717  462579 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-374679:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 10:58:14.399224  458353 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:58:14.399247  458353 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:58:14.399311  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:58:14.429544  458353 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:58:14.429565  458353 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:58:14.429636  458353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:58:14.448016  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:58:14.472233  458353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:58:14.837926  458353 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:58:14.838104  458353 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:58:14.851705  458353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:58:14.939614  458353 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:58:16.153416  458353 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.315225368s)
	I1025 10:58:16.153442  458353 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
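The sed pipeline above patches the coredns ConfigMap in place: it inserts a hosts stanza resolving host.minikube.internal to the network gateway just before the `forward . /etc/resolv.conf` plugin, and a `log` directive just before `errors`. Reconstructed from those two sed expressions, the patched Corefile should contain roughly this (unrelated plugins elided):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.85.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}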
	I1025 10:58:16.154538  458353 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.316502508s)
	I1025 10:58:16.159219  458353 node_ready.go:35] waiting up to 6m0s for node "no-preload-093313" to be "Ready" ...
	I1025 10:58:16.671320  458353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.819582644s)
	I1025 10:58:16.671420  458353 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.731785964s)
	I1025 10:58:16.689031  458353 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-093313" context rescaled to 1 replicas
	I1025 10:58:16.702345  458353 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:58:16.707329  458353 addons.go:514] duration metric: took 2.35675501s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:58:17.917384  462579 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-374679:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.08361938s)
	I1025 10:58:17.917428  462579 kic.go:203] duration metric: took 6.083776609s to extract preloaded images to volume ...
	W1025 10:58:17.917554  462579 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:58:17.917669  462579 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:58:18.012935  462579 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-374679 --name newest-cni-374679 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-374679 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-374679 --network newest-cni-374679 --ip 192.168.76.2 --volume newest-cni-374679:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:58:18.424295  462579 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Running}}
	I1025 10:58:18.453769  462579 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:58:18.479989  462579 cli_runner.go:164] Run: docker exec newest-cni-374679 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:58:18.548187  462579 oci.go:144] the created container "newest-cni-374679" has a running status.
	I1025 10:58:18.548223  462579 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa...
	I1025 10:58:19.383457  462579 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:58:19.405875  462579 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:58:19.425688  462579 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:58:19.425708  462579 kic_runner.go:114] Args: [docker exec --privileged newest-cni-374679 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:58:19.479976  462579 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:58:19.520158  462579 machine.go:93] provisionDockerMachine start ...
	I1025 10:58:19.520251  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:19.549763  462579 main.go:141] libmachine: Using SSH client type: native
	I1025 10:58:19.550157  462579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1025 10:58:19.550169  462579 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:58:19.552534  462579 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57564->127.0.0.1:33443: read: connection reset by peer
	W1025 10:58:18.165538  458353 node_ready.go:57] node "no-preload-093313" has "Ready":"False" status (will retry)
	W1025 10:58:20.663455  458353 node_ready.go:57] node "no-preload-093313" has "Ready":"False" status (will retry)
	I1025 10:58:22.709700  462579 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-374679
	
	I1025 10:58:22.709725  462579 ubuntu.go:182] provisioning hostname "newest-cni-374679"
	I1025 10:58:22.709796  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:22.726203  462579 main.go:141] libmachine: Using SSH client type: native
	I1025 10:58:22.726540  462579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1025 10:58:22.726560  462579 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-374679 && echo "newest-cni-374679" | sudo tee /etc/hostname
	I1025 10:58:22.887491  462579 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-374679
	
	I1025 10:58:22.887586  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:22.906626  462579 main.go:141] libmachine: Using SSH client type: native
	I1025 10:58:22.906939  462579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1025 10:58:22.906965  462579 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-374679' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-374679/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-374679' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:58:23.058779  462579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:58:23.058810  462579 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:58:23.058835  462579 ubuntu.go:190] setting up certificates
	I1025 10:58:23.058846  462579 provision.go:84] configureAuth start
	I1025 10:58:23.058911  462579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-374679
	I1025 10:58:23.081396  462579 provision.go:143] copyHostCerts
	I1025 10:58:23.081474  462579 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:58:23.081484  462579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:58:23.081619  462579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:58:23.081779  462579 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:58:23.081787  462579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:58:23.081821  462579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:58:23.081884  462579 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:58:23.081889  462579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:58:23.081912  462579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:58:23.081972  462579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.newest-cni-374679 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-374679]
	I1025 10:58:24.019205  462579 provision.go:177] copyRemoteCerts
	I1025 10:58:24.019281  462579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:58:24.019331  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:24.037852  462579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:58:24.151428  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:58:24.172253  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:58:24.190664  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:58:24.210027  462579 provision.go:87] duration metric: took 1.151157349s to configureAuth
	I1025 10:58:24.210055  462579 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:58:24.210251  462579 config.go:182] Loaded profile config "newest-cni-374679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:58:24.210374  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:24.227593  462579 main.go:141] libmachine: Using SSH client type: native
	I1025 10:58:24.227917  462579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1025 10:58:24.227939  462579 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:58:24.519205  462579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:58:24.519230  462579 machine.go:96] duration metric: took 4.999052802s to provisionDockerMachine
	I1025 10:58:24.519241  462579 client.go:171] duration metric: took 13.440536059s to LocalClient.Create
	I1025 10:58:24.519255  462579 start.go:167] duration metric: took 13.440604153s to libmachine.API.Create "newest-cni-374679"
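
The /etc/sysconfig/crio.minikube drop-in written above presumably feeds CRI-O through an EnvironmentFile= line in its unit; two assumed commands (not part of this run) that would confirm the flag landed on the node:

	docker exec newest-cni-374679 cat /etc/sysconfig/crio.minikube
	docker exec newest-cni-374679 pgrep -a crio    # expect --insecure-registry 10.96.0.0/12 on the command line
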
	I1025 10:58:24.519263  462579 start.go:293] postStartSetup for "newest-cni-374679" (driver="docker")
	I1025 10:58:24.519273  462579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:58:24.519341  462579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:58:24.519388  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:24.544709  462579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:58:24.650408  462579 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:58:24.655742  462579 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:58:24.655781  462579 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:58:24.655793  462579 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:58:24.655899  462579 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:58:24.656031  462579 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:58:24.656145  462579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:58:24.665932  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:58:24.685599  462579 start.go:296] duration metric: took 166.321096ms for postStartSetup
	I1025 10:58:24.686160  462579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-374679
	I1025 10:58:24.703228  462579 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/config.json ...
	I1025 10:58:24.703521  462579 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:58:24.703570  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:24.721678  462579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:58:24.823202  462579 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:58:24.827928  462579 start.go:128] duration metric: took 13.753011448s to createHost
	I1025 10:58:24.827955  462579 start.go:83] releasing machines lock for "newest-cni-374679", held for 13.753151625s
	I1025 10:58:24.828057  462579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-374679
	I1025 10:58:24.844247  462579 ssh_runner.go:195] Run: cat /version.json
	I1025 10:58:24.844306  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:24.844567  462579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:58:24.844632  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:24.862934  462579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:58:24.879396  462579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:58:24.974041  462579 ssh_runner.go:195] Run: systemctl --version
	I1025 10:58:25.077623  462579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:58:25.117623  462579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:58:25.122317  462579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:58:25.122444  462579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:58:25.152689  462579 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 10:58:25.152728  462579 start.go:495] detecting cgroup driver to use...
	I1025 10:58:25.152765  462579 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:58:25.152826  462579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:58:25.172681  462579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:58:25.186727  462579 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:58:25.186803  462579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:58:25.205457  462579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:58:25.225572  462579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:58:25.351681  462579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:58:25.486668  462579 docker.go:234] disabling docker service ...
	I1025 10:58:25.486739  462579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:58:25.512544  462579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:58:25.529315  462579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:58:25.663344  462579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:58:25.787647  462579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:58:25.801143  462579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:58:25.816415  462579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:58:25.816486  462579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:58:25.826095  462579 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:58:25.826169  462579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:58:25.835772  462579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:58:25.846318  462579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:58:25.855407  462579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:58:25.863923  462579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:58:25.873128  462579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:58:25.888440  462579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
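
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment (reconstructed from the commands; not captured from the node):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
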
	I1025 10:58:25.897870  462579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:58:25.906296  462579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:58:25.914338  462579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:58:26.040855  462579 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:58:26.166487  462579 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:58:26.166560  462579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
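
The 60s budget amounts to polling until the socket exists; a bash sketch of the wait loop (the real logic lives in start.go):

	for _ in $(seq 1 60); do
	  test -S /var/run/crio/crio.sock && break
	  sleep 1
	done
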
	I1025 10:58:26.170685  462579 start.go:563] Will wait 60s for crictl version
	I1025 10:58:26.170877  462579 ssh_runner.go:195] Run: which crictl
	I1025 10:58:26.174971  462579 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:58:26.200539  462579 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
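
The version block above is what crictl reports once pointed at CRI-O; the equivalent manual invocation, with the endpoint matching the /etc/crictl.yaml written earlier:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
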
	I1025 10:58:26.200635  462579 ssh_runner.go:195] Run: crio --version
	I1025 10:58:26.229645  462579 ssh_runner.go:195] Run: crio --version
	I1025 10:58:26.263789  462579 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:58:26.266758  462579 cli_runner.go:164] Run: docker network inspect newest-cni-374679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:58:26.282561  462579 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:58:26.286421  462579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
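
That one-liner is minikube's hosts-rewrite idiom: filter out any stale host.minikube.internal entry, append the fresh mapping, then copy the temp file back. The same command, reflowed and annotated:

	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts   # keep everything except the old entry
	  echo "192.168.76.1	host.minikube.internal"       # append the gateway mapping
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts    # cp, not mv: /etc/hosts is bind-mounted in the container, so it must be rewritten in place
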
	I1025 10:58:26.299427  462579 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1025 10:58:23.162887  458353 node_ready.go:57] node "no-preload-093313" has "Ready":"False" status (will retry)
	W1025 10:58:25.664785  458353 node_ready.go:57] node "no-preload-093313" has "Ready":"False" status (will retry)
	I1025 10:58:26.302256  462579 kubeadm.go:883] updating cluster {Name:newest-cni-374679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:58:26.302400  462579 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:58:26.302482  462579 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:58:26.346684  462579 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:58:26.346711  462579 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:58:26.346768  462579 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:58:26.373796  462579 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:58:26.373820  462579 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:58:26.373828  462579 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:58:26.373925  462579 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-374679 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:58:26.374039  462579 ssh_runner.go:195] Run: crio config
	I1025 10:58:26.425799  462579 cni.go:84] Creating CNI manager for ""
	I1025 10:58:26.425823  462579 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:58:26.425844  462579 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 10:58:26.425871  462579 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-374679 NodeName:newest-cni-374679 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:58:26.426079  462579 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-374679"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:58:26.426159  462579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:58:26.433782  462579 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:58:26.433876  462579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:58:26.441241  462579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:58:26.453959  462579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:58:26.467788  462579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
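
The kubeadm config rendered above is staged as kubeadm.yaml.new and promoted before init; it could also be sanity-checked offline, assuming kubeadm v1.34.1 on PATH (not something this run does):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
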
	I1025 10:58:26.486131  462579 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:58:26.489560  462579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:58:26.499774  462579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:58:26.628120  462579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:58:26.646609  462579 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679 for IP: 192.168.76.2
	I1025 10:58:26.646628  462579 certs.go:195] generating shared ca certs ...
	I1025 10:58:26.646644  462579 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:26.646797  462579 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:58:26.646848  462579 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:58:26.646861  462579 certs.go:257] generating profile certs ...
	I1025 10:58:26.646915  462579 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.key
	I1025 10:58:26.646932  462579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.crt with IP's: []
	I1025 10:58:27.975991  462579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.crt ...
	I1025 10:58:27.976029  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.crt: {Name:mk33f9548b2e8e050334262e4e13576b670afc14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:27.976235  462579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.key ...
	I1025 10:58:27.976249  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.key: {Name:mk1237d6927ddef67436b0ac9efba3211b433c17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:27.976350  462579 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key.de28dca6
	I1025 10:58:27.976367  462579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt.de28dca6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 10:58:28.631503  462579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt.de28dca6 ...
	I1025 10:58:28.631536  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt.de28dca6: {Name:mk18b35fb2c66fb75733fa3fccef46e3d42071f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:28.631730  462579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key.de28dca6 ...
	I1025 10:58:28.631745  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key.de28dca6: {Name:mk8e642326cbbab18dd4eeab2907fcc966b9062e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:28.631833  462579 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt.de28dca6 -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt
	I1025 10:58:28.631917  462579 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key.de28dca6 -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key
	I1025 10:58:28.631983  462579 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.key
	I1025 10:58:28.632001  462579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.crt with IP's: []
	I1025 10:58:30.131642  462579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.crt ...
	I1025 10:58:30.131680  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.crt: {Name:mk95084b72f2abc40fa7e538044505840687a45f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:30.131908  462579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.key ...
	I1025 10:58:30.131923  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.key: {Name:mk78a365eceeac82cee17c21ec1560ea43b277f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:30.132139  462579 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:58:30.132184  462579 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:58:30.132197  462579 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:58:30.132224  462579 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:58:30.132250  462579 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:58:30.132306  462579 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:58:30.132359  462579 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:58:30.133068  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:58:30.154948  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:58:30.179465  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:58:30.200201  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:58:30.220850  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:58:30.241805  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:58:30.263856  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:58:30.284447  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:58:30.303171  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:58:30.322948  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:58:30.341835  462579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:58:30.367717  462579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:58:30.384756  462579 ssh_runner.go:195] Run: openssl version
	I1025 10:58:30.398708  462579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:58:30.435481  462579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:58:30.443778  462579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:58:30.443864  462579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:58:30.498785  462579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:58:30.508375  462579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:58:30.517389  462579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:58:30.521155  462579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:58:30.521239  462579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:58:30.562817  462579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:58:30.579656  462579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:58:30.588182  462579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:58:30.592003  462579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:58:30.592082  462579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:58:30.633302  462579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
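
The link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes plus a .0 suffix, which is why each ln -fs is preceded by an openssl x509 -hash run. For example:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink
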
	I1025 10:58:30.644308  462579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:58:30.650275  462579 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:58:30.650343  462579 kubeadm.go:400] StartCluster: {Name:newest-cni-374679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:58:30.650433  462579 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:58:30.650493  462579 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:58:30.716801  462579 cri.go:89] found id: ""
	I1025 10:58:30.716917  462579 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:58:30.739257  462579 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:58:30.747646  462579 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:58:30.747717  462579 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:58:30.756236  462579 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:58:30.756254  462579 kubeadm.go:157] found existing configuration files:
	
	I1025 10:58:30.756306  462579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:58:30.766706  462579 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:58:30.766768  462579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:58:30.776732  462579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:58:30.791129  462579 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:58:30.791202  462579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:58:30.799420  462579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:58:30.810058  462579 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:58:30.810126  462579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:58:30.821501  462579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:58:30.835608  462579 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:58:30.835674  462579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:58:30.846611  462579 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:58:30.916189  462579 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:58:30.916582  462579 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:58:30.957151  462579 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:58:30.957323  462579 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:58:30.957405  462579 kubeadm.go:318] OS: Linux
	I1025 10:58:30.957479  462579 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:58:30.957564  462579 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:58:30.957646  462579 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:58:30.957728  462579 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:58:30.957817  462579 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:58:30.957928  462579 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:58:30.958030  462579 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:58:30.958135  462579 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:58:30.958229  462579 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:58:31.052897  462579 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:58:31.053087  462579 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:58:31.053221  462579 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:58:31.063147  462579 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1025 10:58:28.162660  458353 node_ready.go:57] node "no-preload-093313" has "Ready":"False" status (will retry)
	W1025 10:58:30.162818  458353 node_ready.go:57] node "no-preload-093313" has "Ready":"False" status (will retry)
	I1025 10:58:30.677420  458353 node_ready.go:49] node "no-preload-093313" is "Ready"
	I1025 10:58:30.677451  458353 node_ready.go:38] duration metric: took 14.5181592s for node "no-preload-093313" to be "Ready" ...
	I1025 10:58:30.677465  458353 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:58:30.677529  458353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:58:30.693880  458353 api_server.go:72] duration metric: took 16.343726661s to wait for apiserver process to appear ...
	I1025 10:58:30.693903  458353 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:58:30.693923  458353 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:58:30.716726  458353 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
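
The healthz probe is a plain HTTPS GET; an equivalent manual check (certificate verification skipped via -k for brevity):

	curl -sk https://192.168.85.2:8443/healthz
	# -> ok
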
	I1025 10:58:30.719040  458353 api_server.go:141] control plane version: v1.34.1
	I1025 10:58:30.719070  458353 api_server.go:131] duration metric: took 25.158864ms to wait for apiserver health ...
	I1025 10:58:30.719080  458353 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:58:30.725248  458353 system_pods.go:59] 8 kube-system pods found
	I1025 10:58:30.725291  458353 system_pods.go:61] "coredns-66bc5c9577-c56mp" [ee976d20-a036-4d38-ad57-a502bf3d0ff7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:58:30.725298  458353 system_pods.go:61] "etcd-no-preload-093313" [83fac023-8769-42f1-bb01-7b45b695a20f] Running
	I1025 10:58:30.725305  458353 system_pods.go:61] "kindnet-6tbtt" [9b74e355-e50d-43f8-94b8-43fdbad27e8d] Running
	I1025 10:58:30.725309  458353 system_pods.go:61] "kube-apiserver-no-preload-093313" [5b7a2f41-bfcc-4460-bc30-242d59d2cfa4] Running
	I1025 10:58:30.725315  458353 system_pods.go:61] "kube-controller-manager-no-preload-093313" [890c70e0-54a4-4423-bbd0-245fbbae3273] Running
	I1025 10:58:30.725319  458353 system_pods.go:61] "kube-proxy-vlb79" [9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc] Running
	I1025 10:58:30.725324  458353 system_pods.go:61] "kube-scheduler-no-preload-093313" [6376071f-6220-481e-b3ec-fed60fe4f008] Running
	I1025 10:58:30.725330  458353 system_pods.go:61] "storage-provisioner" [335dab10-1baa-4bca-afa1-0ccae3bddad5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:58:30.725342  458353 system_pods.go:74] duration metric: took 6.255711ms to wait for pod list to return data ...
	I1025 10:58:30.725355  458353 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:58:30.728233  458353 default_sa.go:45] found service account: "default"
	I1025 10:58:30.728258  458353 default_sa.go:55] duration metric: took 2.896001ms for default service account to be created ...
	I1025 10:58:30.728268  458353 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:58:30.731868  458353 system_pods.go:86] 8 kube-system pods found
	I1025 10:58:30.731907  458353 system_pods.go:89] "coredns-66bc5c9577-c56mp" [ee976d20-a036-4d38-ad57-a502bf3d0ff7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:58:30.731913  458353 system_pods.go:89] "etcd-no-preload-093313" [83fac023-8769-42f1-bb01-7b45b695a20f] Running
	I1025 10:58:30.731921  458353 system_pods.go:89] "kindnet-6tbtt" [9b74e355-e50d-43f8-94b8-43fdbad27e8d] Running
	I1025 10:58:30.731926  458353 system_pods.go:89] "kube-apiserver-no-preload-093313" [5b7a2f41-bfcc-4460-bc30-242d59d2cfa4] Running
	I1025 10:58:30.731930  458353 system_pods.go:89] "kube-controller-manager-no-preload-093313" [890c70e0-54a4-4423-bbd0-245fbbae3273] Running
	I1025 10:58:30.731935  458353 system_pods.go:89] "kube-proxy-vlb79" [9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc] Running
	I1025 10:58:30.731940  458353 system_pods.go:89] "kube-scheduler-no-preload-093313" [6376071f-6220-481e-b3ec-fed60fe4f008] Running
	I1025 10:58:30.731945  458353 system_pods.go:89] "storage-provisioner" [335dab10-1baa-4bca-afa1-0ccae3bddad5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:58:30.731961  458353 retry.go:31] will retry after 221.305441ms: missing components: kube-dns
	I1025 10:58:30.962307  458353 system_pods.go:86] 8 kube-system pods found
	I1025 10:58:30.962344  458353 system_pods.go:89] "coredns-66bc5c9577-c56mp" [ee976d20-a036-4d38-ad57-a502bf3d0ff7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:58:30.962351  458353 system_pods.go:89] "etcd-no-preload-093313" [83fac023-8769-42f1-bb01-7b45b695a20f] Running
	I1025 10:58:30.962357  458353 system_pods.go:89] "kindnet-6tbtt" [9b74e355-e50d-43f8-94b8-43fdbad27e8d] Running
	I1025 10:58:30.962371  458353 system_pods.go:89] "kube-apiserver-no-preload-093313" [5b7a2f41-bfcc-4460-bc30-242d59d2cfa4] Running
	I1025 10:58:30.962376  458353 system_pods.go:89] "kube-controller-manager-no-preload-093313" [890c70e0-54a4-4423-bbd0-245fbbae3273] Running
	I1025 10:58:30.962380  458353 system_pods.go:89] "kube-proxy-vlb79" [9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc] Running
	I1025 10:58:30.962384  458353 system_pods.go:89] "kube-scheduler-no-preload-093313" [6376071f-6220-481e-b3ec-fed60fe4f008] Running
	I1025 10:58:30.962390  458353 system_pods.go:89] "storage-provisioner" [335dab10-1baa-4bca-afa1-0ccae3bddad5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:58:30.962403  458353 retry.go:31] will retry after 358.312048ms: missing components: kube-dns
	I1025 10:58:31.324534  458353 system_pods.go:86] 8 kube-system pods found
	I1025 10:58:31.324573  458353 system_pods.go:89] "coredns-66bc5c9577-c56mp" [ee976d20-a036-4d38-ad57-a502bf3d0ff7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:58:31.324580  458353 system_pods.go:89] "etcd-no-preload-093313" [83fac023-8769-42f1-bb01-7b45b695a20f] Running
	I1025 10:58:31.324586  458353 system_pods.go:89] "kindnet-6tbtt" [9b74e355-e50d-43f8-94b8-43fdbad27e8d] Running
	I1025 10:58:31.324590  458353 system_pods.go:89] "kube-apiserver-no-preload-093313" [5b7a2f41-bfcc-4460-bc30-242d59d2cfa4] Running
	I1025 10:58:31.324595  458353 system_pods.go:89] "kube-controller-manager-no-preload-093313" [890c70e0-54a4-4423-bbd0-245fbbae3273] Running
	I1025 10:58:31.324600  458353 system_pods.go:89] "kube-proxy-vlb79" [9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc] Running
	I1025 10:58:31.324603  458353 system_pods.go:89] "kube-scheduler-no-preload-093313" [6376071f-6220-481e-b3ec-fed60fe4f008] Running
	I1025 10:58:31.324610  458353 system_pods.go:89] "storage-provisioner" [335dab10-1baa-4bca-afa1-0ccae3bddad5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:58:31.324626  458353 retry.go:31] will retry after 433.655988ms: missing components: kube-dns
	I1025 10:58:31.770412  458353 system_pods.go:86] 8 kube-system pods found
	I1025 10:58:31.770443  458353 system_pods.go:89] "coredns-66bc5c9577-c56mp" [ee976d20-a036-4d38-ad57-a502bf3d0ff7] Running
	I1025 10:58:31.770451  458353 system_pods.go:89] "etcd-no-preload-093313" [83fac023-8769-42f1-bb01-7b45b695a20f] Running
	I1025 10:58:31.770455  458353 system_pods.go:89] "kindnet-6tbtt" [9b74e355-e50d-43f8-94b8-43fdbad27e8d] Running
	I1025 10:58:31.770459  458353 system_pods.go:89] "kube-apiserver-no-preload-093313" [5b7a2f41-bfcc-4460-bc30-242d59d2cfa4] Running
	I1025 10:58:31.770465  458353 system_pods.go:89] "kube-controller-manager-no-preload-093313" [890c70e0-54a4-4423-bbd0-245fbbae3273] Running
	I1025 10:58:31.770469  458353 system_pods.go:89] "kube-proxy-vlb79" [9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc] Running
	I1025 10:58:31.770474  458353 system_pods.go:89] "kube-scheduler-no-preload-093313" [6376071f-6220-481e-b3ec-fed60fe4f008] Running
	I1025 10:58:31.770479  458353 system_pods.go:89] "storage-provisioner" [335dab10-1baa-4bca-afa1-0ccae3bddad5] Running
	I1025 10:58:31.770487  458353 system_pods.go:126] duration metric: took 1.042212864s to wait for k8s-apps to be running ...
	I1025 10:58:31.770494  458353 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:58:31.770551  458353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:58:31.795164  458353 system_svc.go:56] duration metric: took 24.658823ms WaitForService to wait for kubelet
	I1025 10:58:31.795189  458353 kubeadm.go:586] duration metric: took 17.445041882s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:58:31.795208  458353 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:58:31.798443  458353 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:58:31.798512  458353 node_conditions.go:123] node cpu capacity is 2
	I1025 10:58:31.798541  458353 node_conditions.go:105] duration metric: took 3.326282ms to run NodePressure ...
	I1025 10:58:31.798568  458353 start.go:241] waiting for startup goroutines ...
	I1025 10:58:31.798600  458353 start.go:246] waiting for cluster config update ...
	I1025 10:58:31.798630  458353 start.go:255] writing updated cluster config ...
	I1025 10:58:31.798944  458353 ssh_runner.go:195] Run: rm -f paused
	I1025 10:58:31.803140  458353 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:58:31.808054  458353 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c56mp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:31.817065  458353 pod_ready.go:94] pod "coredns-66bc5c9577-c56mp" is "Ready"
	I1025 10:58:31.817131  458353 pod_ready.go:86] duration metric: took 9.053792ms for pod "coredns-66bc5c9577-c56mp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:31.820182  458353 pod_ready.go:83] waiting for pod "etcd-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:31.825970  458353 pod_ready.go:94] pod "etcd-no-preload-093313" is "Ready"
	I1025 10:58:31.826057  458353 pod_ready.go:86] duration metric: took 5.800585ms for pod "etcd-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:31.828967  458353 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:31.834607  458353 pod_ready.go:94] pod "kube-apiserver-no-preload-093313" is "Ready"
	I1025 10:58:31.834678  458353 pod_ready.go:86] duration metric: took 5.644695ms for pod "kube-apiserver-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:31.837396  458353 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:32.207392  458353 pod_ready.go:94] pod "kube-controller-manager-no-preload-093313" is "Ready"
	I1025 10:58:32.207421  458353 pod_ready.go:86] duration metric: took 369.960008ms for pod "kube-controller-manager-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:32.408362  458353 pod_ready.go:83] waiting for pod "kube-proxy-vlb79" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:32.807204  458353 pod_ready.go:94] pod "kube-proxy-vlb79" is "Ready"
	I1025 10:58:32.807235  458353 pod_ready.go:86] duration metric: took 398.838365ms for pod "kube-proxy-vlb79" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:33.008399  458353 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:33.408942  458353 pod_ready.go:94] pod "kube-scheduler-no-preload-093313" is "Ready"
	I1025 10:58:33.408969  458353 pod_ready.go:86] duration metric: took 400.540288ms for pod "kube-scheduler-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:58:33.408983  458353 pod_ready.go:40] duration metric: took 1.605815378s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:58:33.490261  458353 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:58:33.493448  458353 out.go:179] * Done! kubectl is now configured to use "no-preload-093313" cluster and "default" namespace by default
	I1025 10:58:31.066615  462579 out.go:252]   - Generating certificates and keys ...
	I1025 10:58:31.066767  462579 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:58:31.066848  462579 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:58:31.149130  462579 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:58:31.903265  462579 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:58:31.968055  462579 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:58:32.444010  462579 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:58:32.588572  462579 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:58:32.588743  462579 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-374679] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:58:32.782535  462579 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:58:32.782824  462579 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-374679] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:58:33.800991  462579 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:58:34.404644  462579 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:58:34.550327  462579 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:58:34.550899  462579 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:58:34.848915  462579 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:58:35.207722  462579 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:58:35.610639  462579 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:58:35.755903  462579 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:58:36.529889  462579 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:58:36.530612  462579 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:58:36.533215  462579 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:58:36.537093  462579 out.go:252]   - Booting up control plane ...
	I1025 10:58:36.537203  462579 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:58:36.537291  462579 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:58:36.538959  462579 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:58:36.554369  462579 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:58:36.554760  462579 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:58:36.563330  462579 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:58:36.563653  462579 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:58:36.563879  462579 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:58:36.721832  462579 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:58:36.721973  462579 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:58:37.724417  462579 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002349577s
	I1025 10:58:37.729020  462579 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:58:37.729401  462579 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 10:58:37.729511  462579 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:58:37.729594  462579 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:58:41.663360  462579 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.933233358s
	I1025 10:58:43.630333  462579 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.900789369s
	I1025 10:58:45.266704  462579 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.511257596s
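
Both the kubelet-check and the control-plane-check above are simple HTTP polls: hit a health endpoint until it answers 200 or the 4m0s budget runs out. A stdlib sketch of that loop, using the kubelet endpoint from the log (the 500ms poll interval is an assumption):

    // Illustrative sketch of the health-check loop: poll an endpoint until it
    // returns 200 OK or the deadline passes, as kubeadm does for the kubelet
    // at http://127.0.0.1:10248/healthz.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // poll interval is an assumption
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
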
	I1025 10:58:45.298904  462579 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:58:45.330856  462579 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:58:45.391381  462579 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:58:45.391604  462579 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-374679 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:58:45.412987  462579 kubeadm.go:318] [bootstrap-token] Using token: fogg6n.y7680cjnbeitk14s
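
Bootstrap tokens like "fogg6n.y7680cjnbeitk14s" follow kubeadm's fixed format: a 6-character token ID, a dot, and a 16-character secret, all lowercase alphanumerics. A quick format check:

    // Sketch: validate the bootstrap-token format (6-char ID "." 16-char secret).
    package main

    import (
        "fmt"
        "regexp"
    )

    var tokenRe = regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)

    func main() {
        fmt.Println(tokenRe.MatchString("fogg6n.y7680cjnbeitk14s")) // true
    }
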
	I1025 10:58:45.415995  462579 out.go:252]   - Configuring RBAC rules ...
	I1025 10:58:45.416131  462579 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:58:45.428327  462579 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:58:45.447766  462579 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:58:45.454658  462579 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:58:45.467392  462579 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:58:45.485726  462579 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:58:45.650883  462579 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:58:46.181273  462579 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:58:46.649086  462579 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:58:46.650466  462579 kubeadm.go:318] 
	I1025 10:58:46.650554  462579 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:58:46.650560  462579 kubeadm.go:318] 
	I1025 10:58:46.650642  462579 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:58:46.650647  462579 kubeadm.go:318] 
	I1025 10:58:46.650673  462579 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:58:46.650735  462579 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:58:46.650788  462579 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:58:46.650818  462579 kubeadm.go:318] 
	I1025 10:58:46.650875  462579 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:58:46.650879  462579 kubeadm.go:318] 
	I1025 10:58:46.650930  462579 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:58:46.650934  462579 kubeadm.go:318] 
	I1025 10:58:46.650988  462579 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:58:46.651067  462579 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:58:46.651138  462579 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:58:46.651142  462579 kubeadm.go:318] 
	I1025 10:58:46.651230  462579 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:58:46.651311  462579 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:58:46.651315  462579 kubeadm.go:318] 
	I1025 10:58:46.651405  462579 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token fogg6n.y7680cjnbeitk14s \
	I1025 10:58:46.651513  462579 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 \
	I1025 10:58:46.651535  462579 kubeadm.go:318] 	--control-plane 
	I1025 10:58:46.651540  462579 kubeadm.go:318] 
	I1025 10:58:46.651629  462579 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:58:46.651633  462579 kubeadm.go:318] 
	I1025 10:58:46.651719  462579 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token fogg6n.y7680cjnbeitk14s \
	I1025 10:58:46.651827  462579 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 
	I1025 10:58:46.656270  462579 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 10:58:46.656508  462579 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 10:58:46.656617  462579 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
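
The --discovery-token-ca-cert-hash in the join commands above is kubeadm's public-key pin: the SHA-256 digest of the CA certificate's Subject Public Key Info, printed as "sha256:<hex>". A stdlib sketch that recomputes it (the ca.crt path is the conventional location, assumed here):

    // Sketch: recompute the discovery-token-ca-cert-hash from the cluster CA.
    // The hash is SHA-256 over the certificate's Subject Public Key Info (DER).
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // path assumed
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
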
	I1025 10:58:46.656633  462579 cni.go:84] Creating CNI manager for ""
	I1025 10:58:46.656641  462579 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:58:46.659656  462579 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 10:58:46.662634  462579 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:58:46.667030  462579 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:58:46.667054  462579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:58:46.680481  462579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:58:46.972677  462579 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:58:46.972804  462579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:46.972873  462579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-374679 minikube.k8s.io/updated_at=2025_10_25T10_58_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=newest-cni-374679 minikube.k8s.io/primary=true
	I1025 10:58:47.244031  462579 ops.go:34] apiserver oom_adj: -16
	I1025 10:58:47.244135  462579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:47.745150  462579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:48.244802  462579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:48.745006  462579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:49.244850  462579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:49.745142  462579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:50.245038  462579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:50.745219  462579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:51.245122  462579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:58:51.345094  462579 kubeadm.go:1113] duration metric: took 4.372330295s to wait for elevateKubeSystemPrivileges
	I1025 10:58:51.345133  462579 kubeadm.go:402] duration metric: took 20.694794111s to StartCluster
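
The run of identical "kubectl get sa default" commands above (10:58:47.244 through 10:58:51.245, one every ~500ms) is a plain poll: the "default" ServiceAccount only exists once the controller-manager has finished its RBAC bootstrap, so minikube retries until it appears. The underlying pattern, as a stdlib sketch with a stand-in check function:

    // Sketch of the 500ms poll seen above: retry a check until it succeeds
    // or the context expires. The check function here is a stand-in.
    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if err := check(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        attempts := 0
        err := pollUntil(ctx, 500*time.Millisecond, func() error {
            attempts++ // stand-in for "kubectl get sa default"
            if attempts < 5 {
                return errors.New("serviceaccount not found yet")
            }
            return nil
        })
        fmt.Println("done after", attempts, "attempts, err:", err)
    }
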
	I1025 10:58:51.345153  462579 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:51.345228  462579 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:58:51.347328  462579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:58:51.347607  462579 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:58:51.348455  462579 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:58:51.348791  462579 config.go:182] Loaded profile config "newest-cni-374679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:58:51.348834  462579 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:58:51.348965  462579 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-374679"
	I1025 10:58:51.348989  462579 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-374679"
	I1025 10:58:51.349025  462579 host.go:66] Checking if "newest-cni-374679" exists ...
	I1025 10:58:51.349601  462579 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:58:51.350866  462579 addons.go:69] Setting default-storageclass=true in profile "newest-cni-374679"
	I1025 10:58:51.351032  462579 out.go:179] * Verifying Kubernetes components...
	I1025 10:58:51.350894  462579 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-374679"
	I1025 10:58:51.351482  462579 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:58:51.354285  462579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:58:51.386393  462579 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:58:51.389315  462579 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:58:51.389342  462579 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:58:51.389414  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:51.396487  462579 addons.go:238] Setting addon default-storageclass=true in "newest-cni-374679"
	I1025 10:58:51.396535  462579 host.go:66] Checking if "newest-cni-374679" exists ...
	I1025 10:58:51.396953  462579 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:58:51.432362  462579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:58:51.440934  462579 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:58:51.440959  462579 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:58:51.441024  462579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:51.467691  462579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:58:51.660298  462579 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:58:51.660490  462579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:58:51.664954  462579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:58:51.723032  462579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:58:52.256573  462579 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:58:52.256635  462579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:58:52.256840  462579 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
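
The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a "log" directive before "errors" and splices a hosts block in front of the "forward . /etc/resolv.conf" line, so the gateway address resolves as host.minikube.internal inside the cluster. Reconstructed from that sed expression, the injected Corefile stanza reads:

        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
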
	I1025 10:58:52.459252  462579 api_server.go:72] duration metric: took 1.11160409s to wait for apiserver process to appear ...
	I1025 10:58:52.459326  462579 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:58:52.459357  462579 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:58:52.472134  462579 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:58:52.473327  462579 api_server.go:141] control plane version: v1.34.1
	I1025 10:58:52.473397  462579 api_server.go:131] duration metric: took 14.040249ms to wait for apiserver health ...
	I1025 10:58:52.473420  462579 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:58:52.477603  462579 system_pods.go:59] 8 kube-system pods found
	I1025 10:58:52.477700  462579 system_pods.go:61] "coredns-66bc5c9577-4d24l" [5674f0d2-53d4-4f02-b91b-0e79c61b0c79] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:58:52.477732  462579 system_pods.go:61] "etcd-newest-cni-374679" [1492f4ab-00e0-4666-93a7-5426af263e77] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:58:52.477758  462579 system_pods.go:61] "kindnet-qtb6l" [4aad81e0-ec4e-4952-812a-459e61c41122] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:58:52.477792  462579 system_pods.go:61] "kube-apiserver-newest-cni-374679" [a8e63617-a996-48d7-8bd5-1d27197e9522] Running
	I1025 10:58:52.477820  462579 system_pods.go:61] "kube-controller-manager-newest-cni-374679" [542d0345-a119-4e95-83a0-97a347312be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:58:52.477848  462579 system_pods.go:61] "kube-proxy-79b8c" [a627fd5d-c73d-44de-9703-44d8ec7f157c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:58:52.477886  462579 system_pods.go:61] "kube-scheduler-newest-cni-374679" [041edb3d-07d6-4a74-b89a-37d705bcafd4] Running
	I1025 10:58:52.477909  462579 system_pods.go:61] "storage-provisioner" [f71da934-4c23-469c-b955-21feda9210a0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:58:52.477949  462579 system_pods.go:74] duration metric: took 4.493062ms to wait for pod list to return data ...
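
The Pending pods above are expected at this point: coredns and storage-provisioner cannot schedule while the node still carries the node.kubernetes.io/not-ready taint, which lifts once kindnet drops a CNI config into /etc/cni/net.d (the DaemonSet pods tolerate the taint and start regardless). A hedged client-go sketch of the kind of listing behind this wait; the kubeconfig path is an assumption:

    // Hedged sketch: list kube-system pods and print their phase, roughly what
    // the system_pods wait above does. The kubeconfig path is an assumption.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%-45s %s\n", p.Name, p.Status.Phase)
        }
    }
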
	I1025 10:58:52.478016  462579 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:58:52.480434  462579 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:58:52.481125  462579 default_sa.go:45] found service account: "default"
	I1025 10:58:52.481149  462579 default_sa.go:55] duration metric: took 3.113456ms for default service account to be created ...
	I1025 10:58:52.481162  462579 kubeadm.go:586] duration metric: took 1.133517313s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 10:58:52.481186  462579 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:58:52.483387  462579 addons.go:514] duration metric: took 1.134535283s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:58:52.484217  462579 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:58:52.484288  462579 node_conditions.go:123] node cpu capacity is 2
	I1025 10:58:52.484317  462579 node_conditions.go:105] duration metric: took 3.125214ms to run NodePressure ...
	I1025 10:58:52.484343  462579 start.go:241] waiting for startup goroutines ...
	I1025 10:58:52.761510  462579 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-374679" context rescaled to 1 replicas
	I1025 10:58:52.761556  462579 start.go:246] waiting for cluster config update ...
	I1025 10:58:52.761590  462579 start.go:255] writing updated cluster config ...
	I1025 10:58:52.761900  462579 ssh_runner.go:195] Run: rm -f paused
	I1025 10:58:52.820039  462579 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:58:52.823428  462579 out.go:179] * Done! kubectl is now configured to use "newest-cni-374679" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.087723387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.093489404Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ebd7c17b-1fd1-4171-80ce-cf23394ac8a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.096582774Z" level=info msg="Ran pod sandbox e301f924cbf92d289e1352e36be64bff67b58625045546354d3b27c8b27ef59c with infra container: kube-system/kindnet-qtb6l/POD" id=ebd7c17b-1fd1-4171-80ce-cf23394ac8a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.102877435Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=cddf7bac-cb69-455a-bc7b-967237e4b7ac name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.106739099Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=773b5b9d-f28f-4fcc-9f00-b89434a5ba1f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.111199118Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-79b8c/POD" id=811c4533-6fc3-4a26-a374-04f1902608b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.111287431Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.115500023Z" level=info msg="Creating container: kube-system/kindnet-qtb6l/kindnet-cni" id=745431f6-fe03-49eb-840b-18c4e6c43124 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.115721145Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.127499704Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=811c4533-6fc3-4a26-a374-04f1902608b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.131736296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.133800257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.133293791Z" level=info msg="Ran pod sandbox 3dc8d6d9c9fa81438f085442b889b11da0a2f30c75032b3318d9246fab113180 with infra container: kube-system/kube-proxy-79b8c/POD" id=811c4533-6fc3-4a26-a374-04f1902608b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.143833923Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=64a66352-9ff9-4724-ac2f-ce1161a9031f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.147365261Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fd2a5219-a002-470b-a7db-4e5d10556951 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.161715347Z" level=info msg="Creating container: kube-system/kube-proxy-79b8c/kube-proxy" id=b9334d70-f04e-4f3a-b516-c13a5602cdd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.161955973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.170263018Z" level=info msg="Created container e9e38188e174d894d97590ae276888802da4eb3af98158fe1b2f523ec3bee957: kube-system/kindnet-qtb6l/kindnet-cni" id=745431f6-fe03-49eb-840b-18c4e6c43124 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.180959122Z" level=info msg="Starting container: e9e38188e174d894d97590ae276888802da4eb3af98158fe1b2f523ec3bee957" id=90c2dbeb-980b-4836-a5d1-90fbb47684f4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.188695183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.189271583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.19591581Z" level=info msg="Started container" PID=1516 containerID=e9e38188e174d894d97590ae276888802da4eb3af98158fe1b2f523ec3bee957 description=kube-system/kindnet-qtb6l/kindnet-cni id=90c2dbeb-980b-4836-a5d1-90fbb47684f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e301f924cbf92d289e1352e36be64bff67b58625045546354d3b27c8b27ef59c
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.227601068Z" level=info msg="Created container 513e9488f87f34130ab598b253d41cf352bea57e78f55827dd983912b9b2b423: kube-system/kube-proxy-79b8c/kube-proxy" id=b9334d70-f04e-4f3a-b516-c13a5602cdd9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.22869492Z" level=info msg="Starting container: 513e9488f87f34130ab598b253d41cf352bea57e78f55827dd983912b9b2b423" id=df6c00f5-9cb2-4ba7-8f36-21069ec3bb0f name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:58:53 newest-cni-374679 crio[840]: time="2025-10-25T10:58:53.231924405Z" level=info msg="Started container" PID=1526 containerID=513e9488f87f34130ab598b253d41cf352bea57e78f55827dd983912b9b2b423 description=kube-system/kube-proxy-79b8c/kube-proxy id=df6c00f5-9cb2-4ba7-8f36-21069ec3bb0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dc8d6d9c9fa81438f085442b889b11da0a2f30c75032b3318d9246fab113180
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	513e9488f87f3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   3dc8d6d9c9fa8       kube-proxy-79b8c                            kube-system
	e9e38188e174d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago        Running             kindnet-cni               0                   e301f924cbf92       kindnet-qtb6l                               kube-system
	aa68368a87332       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            0                   c8b2210c21c05       kube-apiserver-newest-cni-374679            kube-system
	deb8c3f2b4baa       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            0                   86c83d7172160       kube-scheduler-newest-cni-374679            kube-system
	85de3cfa163b5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   0                   ebdc3c4ccb54d       kube-controller-manager-newest-cni-374679   kube-system
	c8fedf37b1d79       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      0                   a3212645a2da2       etcd-newest-cni-374679                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-374679
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-374679
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=newest-cni-374679
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_58_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:58:43 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-374679
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:58:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:58:46 +0000   Sat, 25 Oct 2025 10:58:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:58:46 +0000   Sat, 25 Oct 2025 10:58:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:58:46 +0000   Sat, 25 Oct 2025 10:58:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 10:58:46 +0000   Sat, 25 Oct 2025 10:58:38 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-374679
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                913dee82-c4de-49b4-9575-60baba442e3d
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-374679                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10s
	  kube-system                 kindnet-qtb6l                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-374679             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-374679    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-79b8c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-374679             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 0s                 kube-proxy       
	  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17s (x8 over 17s)  kubelet          Node newest-cni-374679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s (x8 over 17s)  kubelet          Node newest-cni-374679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s (x8 over 17s)  kubelet          Node newest-cni-374679 status is now: NodeHasSufficientPID
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-374679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-374679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s                 kubelet          Node newest-cni-374679 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-374679 event: Registered Node newest-cni-374679 in Controller
	
	
	==> dmesg <==
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[ +23.146409] overlayfs: idmapped layers are currently not supported
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
	[Oct25 10:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:55] overlayfs: idmapped layers are currently not supported
	[Oct25 10:56] overlayfs: idmapped layers are currently not supported
	[ +41.501413] overlayfs: idmapped layers are currently not supported
	[Oct25 10:57] overlayfs: idmapped layers are currently not supported
	[Oct25 10:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c8fedf37b1d7971f97270af8efc49b3509470ad7b20469afee6e3c4e61a53c46] <==
	{"level":"warn","ts":"2025-10-25T10:58:40.589450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:40.630201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:40.653444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:40.689166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:40.703463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:40.727085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:40.762174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:40.773538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:40.806278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:40.841730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:40.871072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:40.908405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:40.942591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:40.965910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:41.015248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:41.034228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:41.099584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:41.119922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:41.150858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:41.182747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:41.214467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:41.254456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:41.278965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:41.292786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:58:41.407674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51726","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:58:54 up  2:41,  0 user,  load average: 3.15, 3.35, 2.95
	Linux newest-cni-374679 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e9e38188e174d894d97590ae276888802da4eb3af98158fe1b2f523ec3bee957] <==
	I1025 10:58:53.264715       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:58:53.316223       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:58:53.316469       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:58:53.316515       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:58:53.316553       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:58:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:58:53.517531       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:58:53.517596       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:58:53.518042       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:58:53.518645       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [aa68368a873327c45a07b59320a05c34822ba5b678be06ac00e8ab2973e3273e] <==
	I1025 10:58:43.089236       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:58:43.090462       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:58:43.117972       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:58:43.151577       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:58:43.151969       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:58:43.178866       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:58:43.198240       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:58:43.211052       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:58:43.455803       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:58:43.492750       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:58:43.492848       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:58:44.753370       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:58:44.833646       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:58:44.934515       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:58:44.952235       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 10:58:44.953837       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:58:44.960560       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:58:45.803133       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:58:46.147080       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:58:46.179149       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:58:46.193086       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:58:51.096146       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:58:51.101574       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:58:51.548066       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:58:51.843637       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [85de3cfa163b5dd181ecafb36b6c104c24280124b7eae0f6d0ab2f77b3aa7711] <==
	I1025 10:58:50.869649       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:58:50.887438       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:58:50.887513       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:58:50.887947       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:58:50.887994       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:58:50.888019       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:58:50.888051       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:58:50.888081       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:58:50.888222       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:58:50.888555       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:58:50.889049       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:58:50.889091       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:58:50.892517       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:58:50.892570       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:58:50.893414       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:58:50.894081       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:58:50.894124       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:58:50.894159       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:58:50.894186       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:58:50.894191       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:58:50.894196       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:58:50.899963       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:58:50.903544       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-374679" podCIDRs=["10.42.0.0/24"]
	I1025 10:58:50.904711       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:58:50.912994       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [513e9488f87f34130ab598b253d41cf352bea57e78f55827dd983912b9b2b423] <==
	I1025 10:58:53.311031       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:58:53.395911       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:58:53.497721       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:58:53.497760       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:58:53.497849       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:58:53.531530       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:58:53.531649       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:58:53.622835       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:58:53.623208       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:58:53.623231       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:58:53.624480       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:58:53.624503       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:58:53.624883       1 config.go:200] "Starting service config controller"
	I1025 10:58:53.624891       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:58:53.625199       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:58:53.625208       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:58:53.625739       1 config.go:309] "Starting node config controller"
	I1025 10:58:53.625750       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:58:53.625757       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:58:53.724660       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:58:53.725781       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:58:53.725807       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [deb8c3f2b4baae01cb339600700b77e8d0d1d2e8ee6492043e5781166dc9c01b] <==
	I1025 10:58:43.601954       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:58:43.604257       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:58:43.604299       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:58:43.609267       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:58:43.609333       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 10:58:43.628692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:58:43.628834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:58:43.643681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:58:43.643867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:58:43.644043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:58:43.645626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:58:43.647600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:58:43.647970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:58:43.648073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:58:43.648179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:58:43.648242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:58:43.648357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:58:43.648413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:58:43.650972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:58:43.651334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:58:43.651498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:58:43.651579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:58:43.651723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:58:43.651905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1025 10:58:45.305582       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:58:46 newest-cni-374679 kubelet[1309]: I1025 10:58:46.622159    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3648083f0b2e715978038b12b0b4dbfb-usr-share-ca-certificates\") pod \"kube-apiserver-newest-cni-374679\" (UID: \"3648083f0b2e715978038b12b0b4dbfb\") " pod="kube-system/kube-apiserver-newest-cni-374679"
	Oct 25 10:58:46 newest-cni-374679 kubelet[1309]: I1025 10:58:46.622176    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad45c4da236a13c7b926abe35826d680-kubeconfig\") pod \"kube-scheduler-newest-cni-374679\" (UID: \"ad45c4da236a13c7b926abe35826d680\") " pod="kube-system/kube-scheduler-newest-cni-374679"
	Oct 25 10:58:47 newest-cni-374679 kubelet[1309]: I1025 10:58:47.301531    1309 apiserver.go:52] "Watching apiserver"
	Oct 25 10:58:47 newest-cni-374679 kubelet[1309]: I1025 10:58:47.321394    1309 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 10:58:47 newest-cni-374679 kubelet[1309]: I1025 10:58:47.396683    1309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-374679"
	Oct 25 10:58:47 newest-cni-374679 kubelet[1309]: E1025 10:58:47.406230    1309 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-374679\" already exists" pod="kube-system/kube-scheduler-newest-cni-374679"
	Oct 25 10:58:47 newest-cni-374679 kubelet[1309]: I1025 10:58:47.434190    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-374679" podStartSLOduration=1.434174456 podStartE2EDuration="1.434174456s" podCreationTimestamp="2025-10-25 10:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:58:47.423136778 +0000 UTC m=+1.338736025" watchObservedRunningTime="2025-10-25 10:58:47.434174456 +0000 UTC m=+1.349773695"
	Oct 25 10:58:47 newest-cni-374679 kubelet[1309]: I1025 10:58:47.447868    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-374679" podStartSLOduration=1.447852003 podStartE2EDuration="1.447852003s" podCreationTimestamp="2025-10-25 10:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:58:47.435781839 +0000 UTC m=+1.351381094" watchObservedRunningTime="2025-10-25 10:58:47.447852003 +0000 UTC m=+1.363451258"
	Oct 25 10:58:47 newest-cni-374679 kubelet[1309]: I1025 10:58:47.448199    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-374679" podStartSLOduration=3.448172917 podStartE2EDuration="3.448172917s" podCreationTimestamp="2025-10-25 10:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:58:47.447622832 +0000 UTC m=+1.363222079" watchObservedRunningTime="2025-10-25 10:58:47.448172917 +0000 UTC m=+1.363772164"
	Oct 25 10:58:47 newest-cni-374679 kubelet[1309]: I1025 10:58:47.473741    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-374679" podStartSLOduration=1.47372225 podStartE2EDuration="1.47372225s" podCreationTimestamp="2025-10-25 10:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:58:47.460680352 +0000 UTC m=+1.376279607" watchObservedRunningTime="2025-10-25 10:58:47.47372225 +0000 UTC m=+1.389321497"
	Oct 25 10:58:50 newest-cni-374679 kubelet[1309]: I1025 10:58:50.977902    1309 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 10:58:50 newest-cni-374679 kubelet[1309]: I1025 10:58:50.978937    1309 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 10:58:51 newest-cni-374679 kubelet[1309]: E1025 10:58:51.899222    1309 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-qtb6l\" is forbidden: User \"system:node:newest-cni-374679\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-374679' and this object" podUID="4aad81e0-ec4e-4952-812a-459e61c41122" pod="kube-system/kindnet-qtb6l"
	Oct 25 10:58:51 newest-cni-374679 kubelet[1309]: E1025 10:58:51.899340    1309 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:newest-cni-374679\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-374679' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 25 10:58:51 newest-cni-374679 kubelet[1309]: I1025 10:58:51.962814    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4aad81e0-ec4e-4952-812a-459e61c41122-cni-cfg\") pod \"kindnet-qtb6l\" (UID: \"4aad81e0-ec4e-4952-812a-459e61c41122\") " pod="kube-system/kindnet-qtb6l"
	Oct 25 10:58:51 newest-cni-374679 kubelet[1309]: I1025 10:58:51.962871    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4aad81e0-ec4e-4952-812a-459e61c41122-xtables-lock\") pod \"kindnet-qtb6l\" (UID: \"4aad81e0-ec4e-4952-812a-459e61c41122\") " pod="kube-system/kindnet-qtb6l"
	Oct 25 10:58:51 newest-cni-374679 kubelet[1309]: I1025 10:58:51.962895    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a627fd5d-c73d-44de-9703-44d8ec7f157c-kube-proxy\") pod \"kube-proxy-79b8c\" (UID: \"a627fd5d-c73d-44de-9703-44d8ec7f157c\") " pod="kube-system/kube-proxy-79b8c"
	Oct 25 10:58:51 newest-cni-374679 kubelet[1309]: I1025 10:58:51.962912    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a627fd5d-c73d-44de-9703-44d8ec7f157c-lib-modules\") pod \"kube-proxy-79b8c\" (UID: \"a627fd5d-c73d-44de-9703-44d8ec7f157c\") " pod="kube-system/kube-proxy-79b8c"
	Oct 25 10:58:51 newest-cni-374679 kubelet[1309]: I1025 10:58:51.962934    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n2dj\" (UniqueName: \"kubernetes.io/projected/a627fd5d-c73d-44de-9703-44d8ec7f157c-kube-api-access-7n2dj\") pod \"kube-proxy-79b8c\" (UID: \"a627fd5d-c73d-44de-9703-44d8ec7f157c\") " pod="kube-system/kube-proxy-79b8c"
	Oct 25 10:58:51 newest-cni-374679 kubelet[1309]: I1025 10:58:51.962958    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4aad81e0-ec4e-4952-812a-459e61c41122-lib-modules\") pod \"kindnet-qtb6l\" (UID: \"4aad81e0-ec4e-4952-812a-459e61c41122\") " pod="kube-system/kindnet-qtb6l"
	Oct 25 10:58:51 newest-cni-374679 kubelet[1309]: I1025 10:58:51.962973    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj7j7\" (UniqueName: \"kubernetes.io/projected/4aad81e0-ec4e-4952-812a-459e61c41122-kube-api-access-cj7j7\") pod \"kindnet-qtb6l\" (UID: \"4aad81e0-ec4e-4952-812a-459e61c41122\") " pod="kube-system/kindnet-qtb6l"
	Oct 25 10:58:51 newest-cni-374679 kubelet[1309]: I1025 10:58:51.962991    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a627fd5d-c73d-44de-9703-44d8ec7f157c-xtables-lock\") pod \"kube-proxy-79b8c\" (UID: \"a627fd5d-c73d-44de-9703-44d8ec7f157c\") " pod="kube-system/kube-proxy-79b8c"
	Oct 25 10:58:53 newest-cni-374679 kubelet[1309]: I1025 10:58:53.055255    1309 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:58:53 newest-cni-374679 kubelet[1309]: I1025 10:58:53.453500    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-79b8c" podStartSLOduration=2.453471546 podStartE2EDuration="2.453471546s" podCreationTimestamp="2025-10-25 10:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:58:53.43611857 +0000 UTC m=+7.351717817" watchObservedRunningTime="2025-10-25 10:58:53.453471546 +0000 UTC m=+7.369070793"
	Oct 25 10:58:54 newest-cni-374679 kubelet[1309]: I1025 10:58:54.122797    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qtb6l" podStartSLOduration=3.122778989 podStartE2EDuration="3.122778989s" podCreationTimestamp="2025-10-25 10:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:58:53.455487491 +0000 UTC m=+7.371086795" watchObservedRunningTime="2025-10-25 10:58:54.122778989 +0000 UTC m=+8.038378236"
	

-- /stdout --
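Note on the kube-scheduler log above: the burst of "Failed to watch ... forbidden" reflector errors at 10:58:43 is followed by "Caches are synced" at 10:58:45, which points at a startup race (the scheduler came up before its RBAC was being served) rather than a missing binding. A quick check, as a sketch against this run's context name:

    # Confirm the scheduler's permissions once the apiserver has settled.
    kubectl --context newest-cni-374679 auth can-i list pods --as=system:kube-scheduler --all-namespaces
    kubectl --context newest-cni-374679 get clusterrolebinding system:kube-scheduler -o wide

If both succeed, the reflector errors were transient.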
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-374679 -n newest-cni-374679
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-374679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-4d24l storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-374679 describe pod coredns-66bc5c9577-4d24l storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-374679 describe pod coredns-66bc5c9577-4d24l storage-provisioner: exit status 1 (81.880416ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-4d24l" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-374679 describe pod coredns-66bc5c9577-4d24l storage-provisioner: exit status 1
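The NotFound errors above are a race: coredns-66bc5c9577-4d24l and storage-provisioner were listed as non-running, but had been replaced or removed by the time describe ran. A sketch of a race-tolerant variant of the same post-mortem step, resolving namespace and name in one pass (context name taken from this run):

    kubectl --context newest-cni-374679 get po -A --field-selector=status.phase!=Running \
      -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
    while read -r ns name; do
      # Tolerate pods that disappear between the list and the describe.
      kubectl --context newest-cni-374679 -n "$ns" describe pod "$name" || true
    done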
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.54s)

TestStartStop/group/newest-cni/serial/Pause (7.7s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-374679 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-374679 --alsologtostderr -v=1: exit status 80 (2.366146223s)

-- stdout --
	* Pausing node newest-cni-374679 ... 
	
	

-- /stdout --
** stderr ** 
	I1025 10:59:18.245179  470156 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:59:18.250187  470156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:59:18.250237  470156 out.go:374] Setting ErrFile to fd 2...
	I1025 10:59:18.250258  470156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:59:18.250589  470156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:59:18.250910  470156 out.go:368] Setting JSON to false
	I1025 10:59:18.250972  470156 mustload.go:65] Loading cluster: newest-cni-374679
	I1025 10:59:18.251406  470156 config.go:182] Loaded profile config "newest-cni-374679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:59:18.251907  470156 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:59:18.292237  470156 host.go:66] Checking if "newest-cni-374679" exists ...
	I1025 10:59:18.292553  470156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:59:18.413129  470156 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-25 10:59:18.399976852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:59:18.413910  470156 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-374679 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:59:18.426362  470156 out.go:179] * Pausing node newest-cni-374679 ... 
	I1025 10:59:18.429150  470156 host.go:66] Checking if "newest-cni-374679" exists ...
	I1025 10:59:18.429525  470156 ssh_runner.go:195] Run: systemctl --version
	I1025 10:59:18.429583  470156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:18.455784  470156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:18.570260  470156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:59:18.589615  470156 pause.go:52] kubelet running: true
	I1025 10:59:18.589690  470156 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:59:19.033303  470156 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:59:19.033397  470156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:59:19.238850  470156 cri.go:89] found id: "8843eb30915a738f33777e68d8a34d76afae84adaf15a489d5e94aee3f640a5d"
	I1025 10:59:19.238871  470156 cri.go:89] found id: "8be40753d70890608b2a489d9aa5f5bf7ab8c112142065d69f7262b75364d774"
	I1025 10:59:19.238875  470156 cri.go:89] found id: "8dd99f23a5130e7f746756316786e7365b2eac6f3b2500b3498d864236737f92"
	I1025 10:59:19.238879  470156 cri.go:89] found id: "ead41b389f560135dd1912a08ba529d0f7ff2d1d41c70eb5d5b61f81dd410d6d"
	I1025 10:59:19.238882  470156 cri.go:89] found id: "4fff872c680ae750b7165d91452f79ef43d35a25038ab06b1ebec4e7bdd2f138"
	I1025 10:59:19.238885  470156 cri.go:89] found id: "6385714248ed5135738e4519a9a7ba1b7a7684bb2deddf78459d3ce4a2c36c29"
	I1025 10:59:19.238889  470156 cri.go:89] found id: ""
	I1025 10:59:19.238937  470156 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:59:19.262746  470156 retry.go:31] will retry after 227.299224ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:59:19Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:59:19.491211  470156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:59:19.514403  470156 pause.go:52] kubelet running: false
	I1025 10:59:19.514527  470156 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:59:19.765673  470156 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:59:19.765803  470156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:59:19.873812  470156 cri.go:89] found id: "8843eb30915a738f33777e68d8a34d76afae84adaf15a489d5e94aee3f640a5d"
	I1025 10:59:19.873881  470156 cri.go:89] found id: "8be40753d70890608b2a489d9aa5f5bf7ab8c112142065d69f7262b75364d774"
	I1025 10:59:19.873900  470156 cri.go:89] found id: "8dd99f23a5130e7f746756316786e7365b2eac6f3b2500b3498d864236737f92"
	I1025 10:59:19.873922  470156 cri.go:89] found id: "ead41b389f560135dd1912a08ba529d0f7ff2d1d41c70eb5d5b61f81dd410d6d"
	I1025 10:59:19.873947  470156 cri.go:89] found id: "4fff872c680ae750b7165d91452f79ef43d35a25038ab06b1ebec4e7bdd2f138"
	I1025 10:59:19.873970  470156 cri.go:89] found id: "6385714248ed5135738e4519a9a7ba1b7a7684bb2deddf78459d3ce4a2c36c29"
	I1025 10:59:19.874007  470156 cri.go:89] found id: ""
	I1025 10:59:19.874089  470156 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:59:19.889654  470156 retry.go:31] will retry after 318.682369ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:59:19Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:59:20.209220  470156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:59:20.225034  470156 pause.go:52] kubelet running: false
	I1025 10:59:20.225118  470156 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:59:20.411646  470156 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:59:20.411740  470156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:59:20.489500  470156 cri.go:89] found id: "8843eb30915a738f33777e68d8a34d76afae84adaf15a489d5e94aee3f640a5d"
	I1025 10:59:20.489534  470156 cri.go:89] found id: "8be40753d70890608b2a489d9aa5f5bf7ab8c112142065d69f7262b75364d774"
	I1025 10:59:20.489540  470156 cri.go:89] found id: "8dd99f23a5130e7f746756316786e7365b2eac6f3b2500b3498d864236737f92"
	I1025 10:59:20.489544  470156 cri.go:89] found id: "ead41b389f560135dd1912a08ba529d0f7ff2d1d41c70eb5d5b61f81dd410d6d"
	I1025 10:59:20.489547  470156 cri.go:89] found id: "4fff872c680ae750b7165d91452f79ef43d35a25038ab06b1ebec4e7bdd2f138"
	I1025 10:59:20.489551  470156 cri.go:89] found id: "6385714248ed5135738e4519a9a7ba1b7a7684bb2deddf78459d3ce4a2c36c29"
	I1025 10:59:20.489554  470156 cri.go:89] found id: ""
	I1025 10:59:20.489605  470156 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:59:20.511706  470156 out.go:203] 
	W1025 10:59:20.514685  470156 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:59:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:59:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:59:20.514708  470156 out.go:285] * 
	* 
	W1025 10:59:20.520273  470156 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:59:20.523324  470156 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-374679 --alsologtostderr -v=1 failed: exit status 80
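The root cause in the stderr above is consistent across all three retries: the pause path stops the kubelet, then shells out to sudo runc list -f json, which fails with open /run/runc: no such file or directory, so minikube exits with GUEST_PAUSE (status 80) even though crictl can still enumerate the six kube-system containers. A sketch for confirming what the pause path sees inside the node (profile name taken from this run; whether CRI-O keeps its runc state under /run/runc on this image is an assumption worth verifying):

    # CRI-O's view of the containers vs. runc's view of its state root.
    out/minikube-linux-arm64 ssh -p newest-cni-374679 -- sudo crictl ps -a
    out/minikube-linux-arm64 ssh -p newest-cni-374679 -- sudo ls -ld /run/runc
    out/minikube-linux-arm64 ssh -p newest-cni-374679 -- sudo runc --root /run/runc list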
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-374679
helpers_test.go:243: (dbg) docker inspect newest-cni-374679:

-- stdout --
	[
	    {
	        "Id": "132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d",
	        "Created": "2025-10-25T10:58:18.030527549Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 466908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:58:57.24994384Z",
	            "FinishedAt": "2025-10-25T10:58:56.184333988Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/hostname",
	        "HostsPath": "/var/lib/docker/containers/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/hosts",
	        "LogPath": "/var/lib/docker/containers/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d-json.log",
	        "Name": "/newest-cni-374679",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-374679:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-374679",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d",
	                "LowerDir": "/var/lib/docker/overlay2/1178d1621d712233b729c8b344a40f947ea6d8f0eb289643c9931db7c67e1eb5-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1178d1621d712233b729c8b344a40f947ea6d8f0eb289643c9931db7c67e1eb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1178d1621d712233b729c8b344a40f947ea6d8f0eb289643c9931db7c67e1eb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1178d1621d712233b729c8b344a40f947ea6d8f0eb289643c9931db7c67e1eb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-374679",
	                "Source": "/var/lib/docker/volumes/newest-cni-374679/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-374679",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-374679",
	                "name.minikube.sigs.k8s.io": "newest-cni-374679",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "db94d63a5c2c2bbbd875be7ac9c0df3cc507c18ba5b8df549e4ce480965c2554",
	            "SandboxKey": "/var/run/docker/netns/db94d63a5c2c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-374679": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:d4:b2:2f:15:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58611ffe5362d6a9d68586194cffae78efe127e4ab53288bcccd59ddf919e4bd",
	                    "EndpointID": "d3139fa5030c00ed625f15626e0d45265a00c113db4a2568f279cff03dee5ead",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-374679",
	                        "132f6b53f321"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
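The Ports map in the inspect output above is what the earlier cli_runner call slices with a Go template to find the SSH endpoint; on this run 22/tcp maps to host port 33448. The same lookup, reproduced from the logged command:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-374679
    # -> 33448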
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-374679 -n newest-cni-374679
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-374679 -n newest-cni-374679: exit status 2 (429.59091ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
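Exit status 2 with Host reporting Running lines up with the failed pause attempt above: it ran systemctl disable --now kubelet before bailing out, so the node container is up but kubelet is down. A sketch for seeing the per-component breakdown behind that exit code:

    out/minikube-linux-arm64 status -p newest-cni-374679 --output json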
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-374679 logs -n 25
E1025 10:59:21.255555  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-374679 logs -n 25: (1.359719534s)
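The cert_rotation error above references addons-184548, a profile deleted much earlier in this report; the shared kubeconfig still carries entries whose client.crt path no longer exists. A cleanup sketch, assuming minikube's usual naming of context, user, and cluster after the profile:

    kubectl config delete-context addons-184548
    kubectl config unset users.addons-184548
    kubectl config unset clusters.addons-184548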
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-348342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │                     │
	│ stop    │ -p embed-certs-348342 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-348342 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:57 UTC │
	│ image   │ default-k8s-diff-port-223394 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p default-k8s-diff-port-223394 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p disable-driver-mounts-487220                                                                                                                                                                                                               │ disable-driver-mounts-487220 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ start   │ -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:58 UTC │
	│ image   │ embed-certs-348342 image list --format=json                                                                                                                                                                                                   │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p embed-certs-348342 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p embed-certs-348342                                                                                                                                                                                                                         │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ delete  │ -p embed-certs-348342                                                                                                                                                                                                                         │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-093313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	│ stop    │ -p no-preload-093313 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-374679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	│ stop    │ -p newest-cni-374679 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ addons  │ enable dashboard -p newest-cni-374679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:59 UTC │
	│ addons  │ enable dashboard -p no-preload-093313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	│ image   │ newest-cni-374679 image list --format=json                                                                                                                                                                                                    │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │ 25 Oct 25 10:59 UTC │
	│ pause   │ -p newest-cni-374679 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:58:58
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:58:58.146917  467402 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:58:58.147129  467402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:58:58.147155  467402 out.go:374] Setting ErrFile to fd 2...
	I1025 10:58:58.147176  467402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:58:58.147467  467402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:58:58.147901  467402 out.go:368] Setting JSON to false
	I1025 10:58:58.148805  467402 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9690,"bootTime":1761380249,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:58:58.148901  467402 start.go:141] virtualization:  
	I1025 10:58:58.151598  467402 out.go:179] * [no-preload-093313] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:58:58.155312  467402 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:58:58.155402  467402 notify.go:220] Checking for updates...
	I1025 10:58:58.161340  467402 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:58:58.164175  467402 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:58:58.167166  467402 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:58:58.170146  467402 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:58:58.173010  467402 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:58:58.176306  467402 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:58:58.176900  467402 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:58:58.208577  467402 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:58:58.208703  467402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:58:58.268659  467402 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-25 10:58:58.259943051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:58:58.268761  467402 docker.go:318] overlay module found
	I1025 10:58:58.271851  467402 out.go:179] * Using the docker driver based on existing profile
	I1025 10:58:58.274711  467402 start.go:305] selected driver: docker
	I1025 10:58:58.274732  467402 start.go:925] validating driver "docker" against &{Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:58:58.274828  467402 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:58:58.275539  467402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:58:58.331815  467402 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-25 10:58:58.322732407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:58:58.332164  467402 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:58:58.332197  467402 cni.go:84] Creating CNI manager for ""
	I1025 10:58:58.332261  467402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:58:58.332302  467402 start.go:349] cluster config:
	{Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:58:58.337401  467402 out.go:179] * Starting "no-preload-093313" primary control-plane node in "no-preload-093313" cluster
	I1025 10:58:58.340201  467402 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:58:58.343136  467402 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:58:58.346112  467402 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:58:58.346213  467402 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:58:58.346594  467402 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/config.json ...
	I1025 10:58:58.346897  467402 cache.go:107] acquiring lock: {Name:mke50a780b6f2fd20bf0f3807e5c55f2165bbc2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.346979  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 10:58:58.346988  467402 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.695µs
	I1025 10:58:58.347001  467402 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 10:58:58.347012  467402 cache.go:107] acquiring lock: {Name:mk6e894f2fc5a822328f2889957353638b611d87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347044  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 10:58:58.347049  467402 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 38.261µs
	I1025 10:58:58.347055  467402 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 10:58:58.347079  467402 cache.go:107] acquiring lock: {Name:mk31460a278f5ce669dba0a3edc67dec38888d3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347111  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 10:58:58.347116  467402 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 38.572µs
	I1025 10:58:58.347121  467402 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 10:58:58.347130  467402 cache.go:107] acquiring lock: {Name:mk4eab06b911708d94fc84824aa5eaf12c5f728f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347155  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 10:58:58.347160  467402 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 31.114µs
	I1025 10:58:58.347165  467402 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 10:58:58.347174  467402 cache.go:107] acquiring lock: {Name:mk0fabb771ebb58b343ccbfcf727bcc4ba36d3bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347205  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 10:58:58.347210  467402 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 36.849µs
	I1025 10:58:58.347216  467402 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 10:58:58.347226  467402 cache.go:107] acquiring lock: {Name:mk9b73d996269c05e36f39d743e660929113e3bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347256  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1025 10:58:58.347261  467402 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 36.89µs
	I1025 10:58:58.347267  467402 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 10:58:58.347279  467402 cache.go:107] acquiring lock: {Name:mka63f62ad185c4a0c57416430877cf896f4796b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347307  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 10:58:58.347316  467402 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 33.757µs
	I1025 10:58:58.347322  467402 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 10:58:58.347331  467402 cache.go:107] acquiring lock: {Name:mk3432d572d15dfd7f5ddfb6ca632d44b3f5c29a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347356  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 10:58:58.347360  467402 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 31.188µs
	I1025 10:58:58.347366  467402 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 10:58:58.347371  467402 cache.go:87] Successfully saved all images to host disk.
	I1025 10:58:58.365050  467402 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:58:58.365069  467402 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:58:58.365083  467402 cache.go:232] Successfully downloaded all kic artifacts
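
The cache pass above is a per-image check-then-skip: take a lock named after the image, stat the tarball under .minikube/cache/images/arm64, and mark the save as succeeded when the file already exists (hence every check completing in microseconds). A minimal Go sketch of that pattern; ensureCached and the lock map are illustrative stand-ins, not minikube's actual cache API:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
)

// one mutex per cache path, mirroring the per-image locks in the log
var cacheLocks sync.Map

// ensureCached reports whether the image tarball is already on disk, so the
// pull-and-save step can be skipped, as in the "exists ... succeeded" lines.
func ensureCached(cacheDir, image string) (bool, error) {
	// "registry.k8s.io/pause:3.10.1" -> ".../registry.k8s.io/pause_3.10.1"
	path := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
	mu, _ := cacheLocks.LoadOrStore(path, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()
	if _, err := os.Stat(path); err == nil {
		return true, nil // cache hit: nothing to download or save
	} else if !os.IsNotExist(err) {
		return false, err
	}
	// cache miss: this is where the image would be pulled and saved to tar
	return false, nil
}

func main() {
	hit, err := ensureCached("/tmp/cache/images/arm64", "registry.k8s.io/pause:3.10.1")
	fmt.Println(hit, err)
}
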
	I1025 10:58:58.365115  467402 start.go:360] acquireMachinesLock for no-preload-093313: {Name:mk08df2ba22812bd327cf8f3a536e0d3054c6132 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.365170  467402 start.go:364] duration metric: took 38.606µs to acquireMachinesLock for "no-preload-093313"
	I1025 10:58:58.365190  467402 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:58:58.365195  467402 fix.go:54] fixHost starting: 
	I1025 10:58:58.365458  467402 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:58:58.382727  467402 fix.go:112] recreateIfNeeded on no-preload-093313: state=Stopped err=<nil>
	W1025 10:58:58.382764  467402 fix.go:138] unexpected machine state, will restart: <nil>
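
fixHost above inspects the existing container and, on state=Stopped, chooses to restart it in place rather than recreate the machine. A rough Go equivalent of that decision, shelling out to the same docker commands the log shows (a sketch, not the minikube code path):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState runs the same inspect the log shows:
//   docker container inspect <name> --format={{.State.Status}}
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := "no-preload-093313"
	state, err := containerState(name)
	if err != nil {
		fmt.Println("container missing, would create from scratch:", err)
		return
	}
	if state == "running" {
		fmt.Println("already running, nothing to do")
		return
	}
	// stopped/exited: restart in place, like the `docker start` in the log
	fmt.Println("restarting:", exec.Command("docker", "start", name).Run())
}
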
	I1025 10:58:57.217805  466783 out.go:252] * Restarting existing docker container for "newest-cni-374679" ...
	I1025 10:58:57.217915  466783 cli_runner.go:164] Run: docker start newest-cni-374679
	I1025 10:58:57.473037  466783 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:58:57.508524  466783 kic.go:430] container "newest-cni-374679" state is running.
	I1025 10:58:57.508920  466783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-374679
	I1025 10:58:57.536177  466783 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/config.json ...
	I1025 10:58:57.536407  466783 machine.go:93] provisionDockerMachine start ...
	I1025 10:58:57.536468  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:57.561366  466783 main.go:141] libmachine: Using SSH client type: native
	I1025 10:58:57.561685  466783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1025 10:58:57.561695  466783 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:58:57.563209  466783 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 10:59:00.713743  466783 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-374679
	
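
The "handshake failed: EOF" line above is expected immediately after docker start: sshd inside the container is not yet accepting connections, so the provisioner retries until the handshake succeeds (about three seconds here). A minimal wait-until-up loop over a plain TCP dial; the real code layers the SSH handshake (golang.org/x/crypto/ssh) on top of this:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the forwarded SSH port until something accepts the
// connection, mirroring the retry behind the "handshake failed: EOF" line.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is up; the real code now runs the SSH handshake
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("127.0.0.1:33448", 30*time.Second))
}
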
	I1025 10:59:00.713766  466783 ubuntu.go:182] provisioning hostname "newest-cni-374679"
	I1025 10:59:00.713841  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:00.731426  466783 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:00.731749  466783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1025 10:59:00.731767  466783 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-374679 && echo "newest-cni-374679" | sudo tee /etc/hostname
	I1025 10:59:00.887202  466783 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-374679
	
	I1025 10:59:00.887274  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:00.904700  466783 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:00.905030  466783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1025 10:59:00.905053  466783 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-374679' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-374679/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-374679' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:59:01.054552  466783 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:59:01.054582  466783 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:59:01.054611  466783 ubuntu.go:190] setting up certificates
	I1025 10:59:01.054620  466783 provision.go:84] configureAuth start
	I1025 10:59:01.054693  466783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-374679
	I1025 10:59:01.073064  466783 provision.go:143] copyHostCerts
	I1025 10:59:01.073134  466783 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:59:01.073148  466783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:59:01.073229  466783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:59:01.073339  466783 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:59:01.073349  466783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:59:01.073378  466783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:59:01.073491  466783 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:59:01.073503  466783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:59:01.073529  466783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:59:01.073595  466783 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.newest-cni-374679 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-374679]
	I1025 10:59:01.651167  466783 provision.go:177] copyRemoteCerts
	I1025 10:59:01.651248  466783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:59:01.651290  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:01.671864  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:01.782691  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:59:01.809247  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:59:01.834115  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:59:01.854945  466783 provision.go:87] duration metric: took 800.30212ms to configureAuth
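
configureAuth above re-issues the machine's server certificate against the minikube CA with the SAN set from the provision.go:117 line (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-374679). A self-contained crypto/x509 sketch of issuing such a SAN-bearing server cert; key sizes, serials, and validity are illustrative, and the real flow loads the CA from .minikube/certs instead of generating one:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// CA key pair (the real flow loads ca.pem / ca-key.pem from disk)
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the same SAN set the log shows for newest-cni-374679.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-374679"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-374679"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server cert, %d bytes DER\n", len(srvDER))
}
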
	I1025 10:59:01.854976  466783 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:59:01.855174  466783 config.go:182] Loaded profile config "newest-cni-374679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:59:01.855296  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:01.873102  466783 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:01.873416  466783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1025 10:59:01.873440  466783 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:58:58.386115  467402 out.go:252] * Restarting existing docker container for "no-preload-093313" ...
	I1025 10:58:58.386206  467402 cli_runner.go:164] Run: docker start no-preload-093313
	I1025 10:58:58.649901  467402 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:58:58.671381  467402 kic.go:430] container "no-preload-093313" state is running.
	I1025 10:58:58.671765  467402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-093313
	I1025 10:58:58.697667  467402 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/config.json ...
	I1025 10:58:58.697898  467402 machine.go:93] provisionDockerMachine start ...
	I1025 10:58:58.697965  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:58:58.719794  467402 main.go:141] libmachine: Using SSH client type: native
	I1025 10:58:58.720103  467402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1025 10:58:58.720112  467402 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:58:58.720977  467402 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 10:59:01.895260  467402 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-093313
	
	I1025 10:59:01.895287  467402 ubuntu.go:182] provisioning hostname "no-preload-093313"
	I1025 10:59:01.895350  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:01.916836  467402 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:01.917152  467402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1025 10:59:01.917175  467402 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-093313 && echo "no-preload-093313" | sudo tee /etc/hostname
	I1025 10:59:02.094810  467402 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-093313
	
	I1025 10:59:02.094966  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:02.120787  467402 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:02.121124  467402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1025 10:59:02.121148  467402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-093313' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-093313/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-093313' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:59:02.290339  467402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:59:02.290391  467402 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:59:02.290413  467402 ubuntu.go:190] setting up certificates
	I1025 10:59:02.290423  467402 provision.go:84] configureAuth start
	I1025 10:59:02.290500  467402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-093313
	I1025 10:59:02.313480  467402 provision.go:143] copyHostCerts
	I1025 10:59:02.313554  467402 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:59:02.313585  467402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:59:02.313656  467402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:59:02.313766  467402 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:59:02.313777  467402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:59:02.313800  467402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:59:02.313863  467402 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:59:02.313874  467402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:59:02.313895  467402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:59:02.314027  467402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.no-preload-093313 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-093313]
	I1025 10:59:02.600083  467402 provision.go:177] copyRemoteCerts
	I1025 10:59:02.600217  467402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:59:02.600310  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:02.622490  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:02.730539  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:59:02.750439  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:59:02.769214  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:59:02.789907  467402 provision.go:87] duration metric: took 499.461539ms to configureAuth
	I1025 10:59:02.789934  467402 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:59:02.790155  467402 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:59:02.790269  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:02.809327  467402 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:02.809652  467402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1025 10:59:02.809676  467402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:59:02.221351  466783 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:59:02.221375  466783 machine.go:96] duration metric: took 4.684958313s to provisionDockerMachine
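
The CRIO_MINIKUBE_OPTIONS drop-in confirmed in this output is how --insecure-registry reaches the daemon: presumably the crio.service unit in the kicbase image sources /etc/sysconfig/crio.minikube via an EnvironmentFile= line and expands the variable in its ExecStart. A hypothetical unit fragment under that assumption (not verified against the image):

[Service]
# assumption: kicbase's crio.service wires the minikube drop-in like this
EnvironmentFile=-/etc/sysconfig/crio.minikube
ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
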
	I1025 10:59:02.221386  466783 start.go:293] postStartSetup for "newest-cni-374679" (driver="docker")
	I1025 10:59:02.221397  466783 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:59:02.221481  466783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:59:02.221535  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:02.242025  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:02.359700  466783 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:59:02.363946  466783 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:59:02.363975  466783 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:59:02.363986  466783 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:59:02.364048  466783 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:59:02.364141  466783 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:59:02.364245  466783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:59:02.373182  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:59:02.391066  466783 start.go:296] duration metric: took 169.664409ms for postStartSetup
	I1025 10:59:02.391157  466783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:59:02.391197  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:02.415658  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:02.523539  466783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:59:02.529500  466783 fix.go:56] duration metric: took 5.335644302s for fixHost
	I1025 10:59:02.529522  466783 start.go:83] releasing machines lock for "newest-cni-374679", held for 5.335691695s
	I1025 10:59:02.529589  466783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-374679
	I1025 10:59:02.552943  466783 ssh_runner.go:195] Run: cat /version.json
	I1025 10:59:02.552996  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:02.553272  466783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:59:02.553319  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:02.575523  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:02.586707  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:02.698809  466783 ssh_runner.go:195] Run: systemctl --version
	I1025 10:59:02.789195  466783 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:59:02.840805  466783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:59:02.846988  466783 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:59:02.847051  466783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:59:02.856863  466783 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
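
The find/-exec mv pair above sidelines any bridge- or podman-flavored CNI configs by renaming them to *.mk_disabled, leaving kindnet as the only active config; here nothing matched. The same sweep in Go, run locally for brevity where the log runs it over SSH:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI configs out of the way,
// like the `find ... -exec mv {} {}.mk_disabled` command in the log.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIs("/etc/cni/net.d")
	fmt.Println(moved, err) // an empty slice matches "nothing to disable"
}
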
	I1025 10:59:02.856891  466783 start.go:495] detecting cgroup driver to use...
	I1025 10:59:02.856924  466783 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:59:02.856983  466783 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:59:02.884213  466783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:59:02.902607  466783 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:59:02.902700  466783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:59:02.921828  466783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:59:02.936311  466783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:59:03.073680  466783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:59:03.242011  466783 docker.go:234] disabling docker service ...
	I1025 10:59:03.242078  466783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:59:03.257649  466783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:59:03.272894  466783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:59:03.420440  466783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:59:03.558798  466783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
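
Only one runtime may own the CRI socket, so before configuring CRI-O the provisioner stops, disables, and masks cri-docker and docker, as in the systemctl runs above. The same ladder as a loop (local systemctl for brevity; failures are tolerated, since the units may not exist on every image):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirror of the log's ladder: stop sockets and services, then disable
	// and mask them so nothing re-acquires the CRI socket on reboot.
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", s, err, out) // tolerated, as in the log
		}
	}
}
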
	I1025 10:59:03.574511  466783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:59:03.590902  466783 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:59:03.590975  466783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.600754  466783 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:59:03.600820  466783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.610621  466783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.621366  466783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.631980  466783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:59:03.640971  466783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.651147  466783 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.661201  466783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.670847  466783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:59:03.679701  466783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:59:03.689569  466783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:03.818812  466783 ssh_runner.go:195] Run: sudo systemctl restart crio
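
The sed chain above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to 3.10.1, force cgroup_manager to cgroupfs, re-seat conmon_cgroup under it, and make default_sysctls open with net.ipv4.ip_unprivileged_port_start=0, then daemon-reload and restart CRI-O. The same substitutions expressed as in-memory Go regexp rewrites (a sketch of the edit logic only, over a toy input config):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// pause_image and cgroup_manager: same anchored replacements as the sed calls
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup, then re-add it after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	// ensure a default_sysctls list exists, then open it with the port sysctl
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n]\n"
	}
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	fmt.Print(conf)
}
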
	I1025 10:59:03.977196  466783 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:59:03.977317  466783 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:59:03.986458  466783 start.go:563] Will wait 60s for crictl version
	I1025 10:59:03.986539  466783 ssh_runner.go:195] Run: which crictl
	I1025 10:59:03.991909  466783 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:59:04.024356  466783 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:59:04.024457  466783 ssh_runner.go:195] Run: crio --version
	I1025 10:59:04.069332  466783 ssh_runner.go:195] Run: crio --version
	I1025 10:59:04.123204  466783 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:59:04.126084  466783 cli_runner.go:164] Run: docker network inspect newest-cni-374679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:59:04.154911  466783 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:59:04.159342  466783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
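
The bash one-liner above makes the network gateway resolvable inside the guest as host.minikube.internal: filter any stale entry out of /etc/hosts, append the current gateway IP, and copy the result back through a temp file plus sudo. The filter-and-append step in Go (string-level sketch; pinHostEntry is an illustrative name):

package main

import (
	"fmt"
	"strings"
)

// pinHostEntry drops any existing "<ip>\thost.minikube.internal" line and
// appends a fresh one, like the bash one-liner in the log.
func pinHostEntry(hosts, gatewayIP string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // stale entry: rewritten below
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") +
		fmt.Sprintf("\n%s\thost.minikube.internal\n", gatewayIP)
}

func main() {
	fmt.Print(pinHostEntry("127.0.0.1\tlocalhost", "192.168.76.1"))
}
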
	I1025 10:59:04.172111  466783 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 10:59:03.188045  467402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:59:03.188070  467402 machine.go:96] duration metric: took 4.490162614s to provisionDockerMachine
	I1025 10:59:03.188082  467402 start.go:293] postStartSetup for "no-preload-093313" (driver="docker")
	I1025 10:59:03.188156  467402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:59:03.188225  467402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:59:03.188275  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:03.215558  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:03.326637  467402 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:59:03.330668  467402 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:59:03.330696  467402 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:59:03.330711  467402 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:59:03.330765  467402 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:59:03.330847  467402 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:59:03.330956  467402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:59:03.343193  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:59:03.369684  467402 start.go:296] duration metric: took 181.585985ms for postStartSetup
	I1025 10:59:03.369849  467402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:59:03.369923  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:03.391648  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:03.495509  467402 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:59:03.501266  467402 fix.go:56] duration metric: took 5.136062649s for fixHost
	I1025 10:59:03.501290  467402 start.go:83] releasing machines lock for "no-preload-093313", held for 5.136110772s
	I1025 10:59:03.501357  467402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-093313
	I1025 10:59:03.518695  467402 ssh_runner.go:195] Run: cat /version.json
	I1025 10:59:03.518744  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:03.518970  467402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:59:03.519168  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:03.551857  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:03.555971  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:03.771951  467402 ssh_runner.go:195] Run: systemctl --version
	I1025 10:59:03.778607  467402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:59:03.826385  467402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:59:03.834988  467402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:59:03.835129  467402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:59:03.843449  467402 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:59:03.843548  467402 start.go:495] detecting cgroup driver to use...
	I1025 10:59:03.843637  467402 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:59:03.843726  467402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:59:03.861638  467402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:59:03.880375  467402 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:59:03.880488  467402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:59:03.897373  467402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:59:03.912411  467402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:59:04.067255  467402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:59:04.215001  467402 docker.go:234] disabling docker service ...
	I1025 10:59:04.215063  467402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:59:04.235976  467402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:59:04.251449  467402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:59:04.418591  467402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:59:04.604423  467402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:59:04.629907  467402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:59:04.647188  467402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:59:04.647242  467402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.661522  467402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:59:04.661590  467402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.676065  467402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.693727  467402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.705745  467402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:59:04.714700  467402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.730845  467402 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.742705  467402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.754664  467402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:59:04.763699  467402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:59:04.772638  467402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:04.958881  467402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:59:05.132513  467402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:59:05.132578  467402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:59:05.136808  467402 start.go:563] Will wait 60s for crictl version
	I1025 10:59:05.136870  467402 ssh_runner.go:195] Run: which crictl
	I1025 10:59:05.140760  467402 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:59:05.180917  467402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:59:05.181082  467402 ssh_runner.go:195] Run: crio --version
	I1025 10:59:05.217754  467402 ssh_runner.go:195] Run: crio --version
	I1025 10:59:05.253608  467402 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:59:04.175065  466783 kubeadm.go:883] updating cluster {Name:newest-cni-374679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:59:04.175214  466783 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:59:04.175292  466783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:59:04.228866  466783 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:59:04.228893  466783 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:59:04.228950  466783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:59:04.264584  466783 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:59:04.264604  466783 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:59:04.264611  466783 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:59:04.264718  466783 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-374679 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
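
The kubelet unit above uses a standard systemd override mechanism: a bare ExecStart= first clears the ExecStart inherited from the base unit, and the following line substitutes the fully flag-specified command; without the blank assignment systemd would reject a second ExecStart for a simple service. A generic drop-in showing the same pattern (paths illustrative):

[Service]
ExecStart=
ExecStart=/usr/local/bin/mydaemon --flag-a --flag-b
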
	I1025 10:59:04.264800  466783 ssh_runner.go:195] Run: crio config
	I1025 10:59:04.344713  466783 cni.go:84] Creating CNI manager for ""
	I1025 10:59:04.344783  466783 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:59:04.344822  466783 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 10:59:04.344872  466783 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-374679 NodeName:newest-cni-374679 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:59:04.345065  466783 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-374679"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
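The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A sketch for sanity-checking such a file with kubeadm itself; "kubeadm config validate" exists in recent kubeadm releases, though the exact minimum version is an assumption here:

    # Validate the generated config against kubeadm's API types
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # Print upstream defaults for side-by-side comparison
    kubeadm config print init-defaults
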
	I1025 10:59:04.345159  466783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:59:04.359606  466783 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:59:04.359714  466783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:59:04.369008  466783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:59:04.385590  466783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:59:04.401580  466783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1025 10:59:04.420122  466783 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:59:04.424798  466783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
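
The one-liner above is how minikube pins control-plane.minikube.internal in /etc/hosts: filter out any stale entry, append the current mapping, write to a PID-named temp file, then copy it back with sudo (a plain shell redirect onto /etc/hosts would run with the unprivileged user's permissions). An annotated equivalent; IP and NAME are placeholders holding the values from this run:

    IP=192.168.76.2; NAME=control-plane.minikube.internal  # placeholders
    {
      grep -v $'\t'"$NAME"'$' /etc/hosts  # drop any existing entry for NAME
      printf '%s\t%s\n' "$IP" "$NAME"     # append the fresh mapping
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts          # cp (not mv) preserves owner/mode
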
	I1025 10:59:04.435492  466783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:04.570914  466783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:59:04.588135  466783 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679 for IP: 192.168.76.2
	I1025 10:59:04.588199  466783 certs.go:195] generating shared ca certs ...
	I1025 10:59:04.588231  466783 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:04.588399  466783 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:59:04.588478  466783 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:59:04.588518  466783 certs.go:257] generating profile certs ...
	I1025 10:59:04.588684  466783 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.key
	I1025 10:59:04.588776  466783 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key.de28dca6
	I1025 10:59:04.588863  466783 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.key
	I1025 10:59:04.589013  466783 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:59:04.589075  466783 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:59:04.589101  466783 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:59:04.589159  466783 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:59:04.589206  466783 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:59:04.589264  466783 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:59:04.589344  466783 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:59:04.590019  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:59:04.617425  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:59:04.647176  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:59:04.672281  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:59:04.704387  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:59:04.775091  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:59:04.810134  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:59:04.846113  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:59:04.885892  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:59:04.929029  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:59:04.955459  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:59:04.980305  466783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:59:04.993891  466783 ssh_runner.go:195] Run: openssl version
	I1025 10:59:05.000999  466783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:59:05.017005  466783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:05.021401  466783 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:05.021558  466783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:05.070057  466783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:59:05.078712  466783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:59:05.088030  466783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:59:05.092790  466783 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:59:05.092909  466783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:59:05.145679  466783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:59:05.159030  466783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:59:05.169527  466783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:59:05.173732  466783 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:59:05.173798  466783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:59:05.217765  466783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
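
The hash-then-symlink pattern in the preceding lines is OpenSSL's CA directory convention: openssl x509 -hash prints the subject-name hash (b5213941 for the minikube CA here), and verifiers using the default certificate directory look the cert up as <hash>.0 under /etc/ssl/certs. A sketch of the same wiring for a single certificate; the example.pem path is hypothetical:

    CERT=/usr/share/ca-certificates/example.pem    # hypothetical cert path
    HASH=$(openssl x509 -hash -noout -in "$CERT")  # subject-name hash
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0" # .0 = first cert with this hash
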
	I1025 10:59:05.229670  466783 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:59:05.234495  466783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:59:05.283721  466783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:59:05.326579  466783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:59:05.398535  466783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:59:05.501606  466783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:59:05.594422  466783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
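
Each -checkend 86400 above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes, non-zero means it expires (or has already expired) within the window, which is presumably what would push minikube down its cert-regeneration path instead. A one-line sketch, assuming a certificate at ./apiserver.crt:

    openssl x509 -noout -in ./apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"
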
	I1025 10:59:05.687918  466783 kubeadm.go:400] StartCluster: {Name:newest-cni-374679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:59:05.688025  466783 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:59:05.688109  466783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:59:05.827573  466783 cri.go:89] found id: "8dd99f23a5130e7f746756316786e7365b2eac6f3b2500b3498d864236737f92"
	I1025 10:59:05.827593  466783 cri.go:89] found id: "ead41b389f560135dd1912a08ba529d0f7ff2d1d41c70eb5d5b61f81dd410d6d"
	I1025 10:59:05.827601  466783 cri.go:89] found id: "4fff872c680ae750b7165d91452f79ef43d35a25038ab06b1ebec4e7bdd2f138"
	I1025 10:59:05.827606  466783 cri.go:89] found id: "6385714248ed5135738e4519a9a7ba1b7a7684bb2deddf78459d3ce4a2c36c29"
	I1025 10:59:05.827609  466783 cri.go:89] found id: ""
	I1025 10:59:05.827674  466783 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:59:05.855776  466783 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:59:05Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:59:05.855861  466783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:59:05.877433  466783 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:59:05.877449  466783 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:59:05.877501  466783 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:59:05.892914  466783 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:59:05.893401  466783 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-374679" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:59:05.893569  466783 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-259409/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-374679" cluster setting kubeconfig missing "newest-cni-374679" context setting]
	I1025 10:59:05.893952  466783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
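
Here minikube noticed the newest-cni-374679 profile was absent from the shared kubeconfig and repaired it under a file lock, adding both a cluster and a context entry. Done by hand, the repair would look roughly like this; the server address and CA path are taken from this run, while the user name mirroring the profile name is an assumption:

    KC=/home/jenkins/minikube-integration/21767-259409/kubeconfig
    kubectl --kubeconfig "$KC" config set-cluster newest-cni-374679 \
      --server=https://192.168.76.2:8443 \
      --certificate-authority=/home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt
    kubectl --kubeconfig "$KC" config set-context newest-cni-374679 \
      --cluster=newest-cni-374679 --user=newest-cni-374679
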
	I1025 10:59:05.895736  466783 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:59:05.915914  466783 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 10:59:05.915949  466783 kubeadm.go:601] duration metric: took 38.494212ms to restartPrimaryControlPlane
	I1025 10:59:05.915964  466783 kubeadm.go:402] duration metric: took 228.053061ms to StartCluster
	I1025 10:59:05.915993  466783 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:05.916057  466783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:59:05.916712  466783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:05.916919  466783 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:59:05.917289  466783 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:59:05.917368  466783 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-374679"
	I1025 10:59:05.917382  466783 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-374679"
	W1025 10:59:05.917393  466783 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:59:05.917415  466783 host.go:66] Checking if "newest-cni-374679" exists ...
	I1025 10:59:05.917873  466783 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:59:05.918315  466783 config.go:182] Loaded profile config "newest-cni-374679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:59:05.918388  466783 addons.go:69] Setting dashboard=true in profile "newest-cni-374679"
	I1025 10:59:05.918401  466783 addons.go:238] Setting addon dashboard=true in "newest-cni-374679"
	W1025 10:59:05.918418  466783 addons.go:247] addon dashboard should already be in state true
	I1025 10:59:05.918448  466783 host.go:66] Checking if "newest-cni-374679" exists ...
	I1025 10:59:05.918891  466783 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:59:05.923189  466783 addons.go:69] Setting default-storageclass=true in profile "newest-cni-374679"
	I1025 10:59:05.923226  466783 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-374679"
	I1025 10:59:05.923272  466783 out.go:179] * Verifying Kubernetes components...
	I1025 10:59:05.923607  466783 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:59:05.932153  466783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:05.966162  466783 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:59:05.969292  466783 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:59:05.969314  466783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:59:05.969382  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:05.988412  466783 addons.go:238] Setting addon default-storageclass=true in "newest-cni-374679"
	W1025 10:59:05.988434  466783 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:59:05.988458  466783 host.go:66] Checking if "newest-cni-374679" exists ...
	I1025 10:59:05.988873  466783 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:59:05.994889  466783 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:59:05.998279  466783 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:59:06.002072  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:59:06.002109  466783 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:59:06.002193  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:06.024422  466783 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:59:06.024445  466783 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:59:06.024516  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:06.073603  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:06.087460  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:06.095916  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
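
The docker container inspect calls above extract the host port that Docker mapped onto the container's 22/tcp, which is why all three SSH clients dial 127.0.0.1:33448. The Go template indexes .NetworkSettings.Ports by port key and then takes the first binding's HostPort. Standalone sketches against the container from this run:

    # Same template as the log, minus the extra quoting
    docker container inspect newest-cni-374679 \
      --format '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}'
    # Shorter equivalent
    docker port newest-cni-374679 22/tcp
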
	I1025 10:59:06.358783  466783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:59:06.415213  466783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:59:06.501678  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:59:06.501699  466783 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:59:06.553435  466783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:59:06.700672  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:59:06.700697  466783 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:59:06.750571  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:59:06.750600  466783 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:59:06.899299  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:59:06.899323  466783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:59:06.953467  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:59:06.953491  466783 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:59:05.256544  467402 cli_runner.go:164] Run: docker network inspect no-preload-093313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:59:05.275674  467402 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:59:05.279982  467402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:59:05.290597  467402 kubeadm.go:883] updating cluster {Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:59:05.290722  467402 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:59:05.290763  467402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:59:05.339756  467402 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:59:05.339832  467402 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:59:05.339855  467402 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 10:59:05.339995  467402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-093313 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:59:05.340124  467402 ssh_runner.go:195] Run: crio config
	I1025 10:59:05.424900  467402 cni.go:84] Creating CNI manager for ""
	I1025 10:59:05.424973  467402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:59:05.425012  467402 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:59:05.425073  467402 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-093313 NodeName:no-preload-093313 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:59:05.425254  467402 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-093313"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:59:05.425366  467402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:59:05.435170  467402 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:59:05.435293  467402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:59:05.444208  467402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:59:05.459318  467402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:59:05.474839  467402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 10:59:05.490424  467402 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:59:05.494551  467402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:59:05.505783  467402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:05.722927  467402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:59:05.753156  467402 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313 for IP: 192.168.85.2
	I1025 10:59:05.753233  467402 certs.go:195] generating shared ca certs ...
	I1025 10:59:05.753283  467402 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:05.753560  467402 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:59:05.753658  467402 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:59:05.753696  467402 certs.go:257] generating profile certs ...
	I1025 10:59:05.753850  467402 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.key
	I1025 10:59:05.754012  467402 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key.bf0f12ad
	I1025 10:59:05.754114  467402 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.key
	I1025 10:59:05.754339  467402 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:59:05.754418  467402 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:59:05.754459  467402 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:59:05.754510  467402 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:59:05.754577  467402 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:59:05.754641  467402 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:59:05.754730  467402 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:59:05.755762  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:59:05.795751  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:59:05.836515  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:59:05.903658  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:59:05.971183  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:59:06.088297  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:59:06.151171  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:59:06.195948  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:59:06.269452  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:59:06.322502  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:59:06.354677  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:59:06.413161  467402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:59:06.443509  467402 ssh_runner.go:195] Run: openssl version
	I1025 10:59:06.459120  467402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:59:06.478275  467402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:59:06.485595  467402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:59:06.485692  467402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:59:06.558971  467402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:59:06.570113  467402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:59:06.583025  467402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:06.590729  467402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:06.590832  467402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:06.646944  467402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:59:06.657160  467402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:59:06.671319  467402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:59:06.676525  467402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:59:06.676668  467402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:59:06.751046  467402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:59:06.760032  467402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:59:06.766700  467402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:59:06.854657  467402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:59:06.968900  467402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:59:07.126629  467402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:59:07.278319  467402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:59:07.451115  467402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 10:59:07.596056  467402 kubeadm.go:400] StartCluster: {Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:59:07.596203  467402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:59:07.596293  467402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:59:07.737195  467402 cri.go:89] found id: "555a214631009b8c9e0ad146cf6605f03eec6b67635b74eb9d3950940eecf3f5"
	I1025 10:59:07.737295  467402 cri.go:89] found id: "3dd46cc93a4d340b21d6515927392c6d678062f1fd4a8eb33513a013a750df3f"
	I1025 10:59:07.737317  467402 cri.go:89] found id: "1abb0086bfd53a0a24fd6a972d03dfa536774e2a3214e984b2913d5d42eb1584"
	I1025 10:59:07.737342  467402 cri.go:89] found id: "2c3118fc8aba39e254ed98a90027a52eb3bc4eb55ca37aed37f0638d414d5a7c"
	I1025 10:59:07.737367  467402 cri.go:89] found id: ""
	I1025 10:59:07.737434  467402 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:59:07.775645  467402 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:59:07Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:59:07.775794  467402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:59:07.797946  467402 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:59:07.798022  467402 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:59:07.798094  467402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:59:07.816600  467402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:59:07.817213  467402 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-093313" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:59:07.817498  467402 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-259409/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-093313" cluster setting kubeconfig missing "no-preload-093313" context setting]
	I1025 10:59:07.817960  467402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:07.819745  467402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:59:07.829523  467402 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:59:07.829592  467402 kubeadm.go:601] duration metric: took 31.549642ms to restartPrimaryControlPlane
	I1025 10:59:07.829617  467402 kubeadm.go:402] duration metric: took 233.56831ms to StartCluster
	I1025 10:59:07.829654  467402 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:07.829729  467402 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:59:07.830675  467402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:07.830929  467402 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:59:07.831336  467402 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:59:07.831365  467402 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:59:07.831702  467402 addons.go:69] Setting storage-provisioner=true in profile "no-preload-093313"
	I1025 10:59:07.831711  467402 addons.go:69] Setting dashboard=true in profile "no-preload-093313"
	I1025 10:59:07.831731  467402 addons.go:238] Setting addon storage-provisioner=true in "no-preload-093313"
	I1025 10:59:07.831765  467402 addons.go:69] Setting default-storageclass=true in profile "no-preload-093313"
	I1025 10:59:07.831756  467402 addons.go:238] Setting addon dashboard=true in "no-preload-093313"
	W1025 10:59:07.831849  467402 addons.go:247] addon dashboard should already be in state true
	I1025 10:59:07.831902  467402 host.go:66] Checking if "no-preload-093313" exists ...
	W1025 10:59:07.831768  467402 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:59:07.831993  467402 host.go:66] Checking if "no-preload-093313" exists ...
	I1025 10:59:07.832545  467402 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:59:07.832608  467402 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:59:07.831848  467402 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-093313"
	I1025 10:59:07.833324  467402 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:59:07.842270  467402 out.go:179] * Verifying Kubernetes components...
	I1025 10:59:07.845490  467402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:07.898047  467402 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:59:07.898133  467402 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:59:07.902159  467402 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:59:07.902194  467402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:59:07.902268  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:07.908276  467402 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:59:07.914059  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:59:07.914088  467402 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:59:07.914164  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:07.915392  467402 addons.go:238] Setting addon default-storageclass=true in "no-preload-093313"
	W1025 10:59:07.915409  467402 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:59:07.915433  467402 host.go:66] Checking if "no-preload-093313" exists ...
	I1025 10:59:07.915837  467402 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:59:07.957499  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:07.958412  467402 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:59:07.958434  467402 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:59:07.958496  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:07.986196  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:08.007780  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:07.015456  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:59:07.015490  466783 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:59:07.059599  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:59:07.059627  466783 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:59:07.086310  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:59:07.086355  466783 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:59:07.115567  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:59:07.115592  466783 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:59:07.164490  466783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:59:08.334144  467402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:59:08.438431  467402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:59:08.462506  467402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:59:08.502450  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:59:08.502528  467402 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:59:08.736385  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:59:08.736463  467402 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:59:08.799105  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:59:08.799187  467402 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:59:08.847821  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:59:08.847901  467402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:59:08.926621  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:59:08.926714  467402 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:59:08.955393  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:59:08.955470  467402 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:59:08.975800  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:59:08.975879  467402 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:59:09.001926  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:59:09.002086  467402 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:59:09.063297  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:59:09.063379  467402 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:59:09.105571  467402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:59:17.007909  466783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.649056426s)
	I1025 10:59:17.007988  466783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.592754036s)
	I1025 10:59:17.008315  466783 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.454854762s)
	I1025 10:59:17.008340  466783 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:59:17.008394  466783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:59:17.008508  466783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.843988694s)
	I1025 10:59:17.012013  466783 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-374679 addons enable metrics-server
	
	I1025 10:59:17.054252  466783 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1025 10:59:17.055941  466783 api_server.go:72] duration metric: took 11.138984373s to wait for apiserver process to appear ...
	I1025 10:59:17.055962  466783 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:59:17.055980  466783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:59:17.057448  466783 addons.go:514] duration metric: took 11.140142529s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 10:59:17.070034  466783 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:59:17.071192  466783 api_server.go:141] control plane version: v1.34.1
	I1025 10:59:17.071254  466783 api_server.go:131] duration metric: took 15.285249ms to wait for apiserver health ...
	I1025 10:59:17.071279  466783 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:59:17.079670  466783 system_pods.go:59] 8 kube-system pods found
	I1025 10:59:17.079756  466783 system_pods.go:61] "coredns-66bc5c9577-4d24l" [5674f0d2-53d4-4f02-b91b-0e79c61b0c79] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:59:17.079782  466783 system_pods.go:61] "etcd-newest-cni-374679" [1492f4ab-00e0-4666-93a7-5426af263e77] Running
	I1025 10:59:17.079821  466783 system_pods.go:61] "kindnet-qtb6l" [4aad81e0-ec4e-4952-812a-459e61c41122] Running
	I1025 10:59:17.079848  466783 system_pods.go:61] "kube-apiserver-newest-cni-374679" [a8e63617-a996-48d7-8bd5-1d27197e9522] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:59:17.079872  466783 system_pods.go:61] "kube-controller-manager-newest-cni-374679" [542d0345-a119-4e95-83a0-97a347312be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:59:17.079900  466783 system_pods.go:61] "kube-proxy-79b8c" [a627fd5d-c73d-44de-9703-44d8ec7f157c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:59:17.079931  466783 system_pods.go:61] "kube-scheduler-newest-cni-374679" [041edb3d-07d6-4a74-b89a-37d705bcafd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:59:17.079960  466783 system_pods.go:61] "storage-provisioner" [f71da934-4c23-469c-b955-21feda9210a0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:59:17.079986  466783 system_pods.go:74] duration metric: took 8.685435ms to wait for pod list to return data ...
	I1025 10:59:17.080009  466783 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:59:17.086968  466783 default_sa.go:45] found service account: "default"
	I1025 10:59:17.087036  466783 default_sa.go:55] duration metric: took 6.997347ms for default service account to be created ...
	I1025 10:59:17.087064  466783 kubeadm.go:586] duration metric: took 11.170112136s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 10:59:17.087113  466783 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:59:17.090038  466783 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:59:17.090128  466783 node_conditions.go:123] node cpu capacity is 2
	I1025 10:59:17.090164  466783 node_conditions.go:105] duration metric: took 3.026136ms to run NodePressure ...
	I1025 10:59:17.090212  466783 start.go:241] waiting for startup goroutines ...
	I1025 10:59:17.090246  466783 start.go:246] waiting for cluster config update ...
	I1025 10:59:17.090299  466783 start.go:255] writing updated cluster config ...
	I1025 10:59:17.090686  466783 ssh_runner.go:195] Run: rm -f paused
	I1025 10:59:17.205579  466783 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:59:17.209055  466783 out.go:179] * Done! kubectl is now configured to use "newest-cni-374679" cluster and "default" namespace by default
	I1025 10:59:19.970663  467402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.636487093s)
	I1025 10:59:19.970728  467402 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (11.53227663s)
	I1025 10:59:19.970754  467402 node_ready.go:35] waiting up to 6m0s for node "no-preload-093313" to be "Ready" ...
	I1025 10:59:19.971081  467402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.5085004s)
	I1025 10:59:19.971347  467402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.865695731s)
	I1025 10:59:19.974388  467402 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-093313 addons enable metrics-server
	
	I1025 10:59:19.993975  467402 node_ready.go:49] node "no-preload-093313" is "Ready"
	I1025 10:59:19.994025  467402 node_ready.go:38] duration metric: took 23.251286ms for node "no-preload-093313" to be "Ready" ...
	I1025 10:59:19.994039  467402 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:59:19.994113  467402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:59:20.006675  467402 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
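
The healthz wait logged above (api_server.go polling https://192.168.76.2:8443/healthz until it returns 200 "ok") is at its core a timed HTTPS poll. A minimal Go sketch of such a loop follows; skipping TLS verification is an assumption made here because the sketch has no CA bundle for the test cluster's self-signed apiserver certificate, and is not a claim about how minikube itself authenticates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Verification is skipped only because this sketch has no CA
		// bundle for the cluster's self-signed certificate (assumption).
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // "ok"
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}
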
	
	
	==> CRI-O <==
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.422468302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.427491272Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c516aa99-c727-41c2-a328-46fb04f9a989 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.445445665Z" level=info msg="Ran pod sandbox 98ea141582ade312cd93de0b0c82496dfcee38be5ee122ad0fb32e95e03617b9 with infra container: kube-system/kindnet-qtb6l/POD" id=c516aa99-c727-41c2-a328-46fb04f9a989 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.451537839Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f77a4d98-b4b5-49fb-bf26-9a5fd43ed0fe name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.475609227Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7f354199-6b87-42ba-b7de-1858cc704be1 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.476936221Z" level=info msg="Creating container: kube-system/kindnet-qtb6l/kindnet-cni" id=9571aee9-20fb-4de8-aa6f-7cc08f2893ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.477059398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.499676585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.50024537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.553700898Z" level=info msg="Created container 8be40753d70890608b2a489d9aa5f5bf7ab8c112142065d69f7262b75364d774: kube-system/kindnet-qtb6l/kindnet-cni" id=9571aee9-20fb-4de8-aa6f-7cc08f2893ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.554404191Z" level=info msg="Starting container: 8be40753d70890608b2a489d9aa5f5bf7ab8c112142065d69f7262b75364d774" id=36d55802-9685-47bc-b1ec-09e16f317992 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.561832918Z" level=info msg="Started container" PID=1053 containerID=8be40753d70890608b2a489d9aa5f5bf7ab8c112142065d69f7262b75364d774 description=kube-system/kindnet-qtb6l/kindnet-cni id=36d55802-9685-47bc-b1ec-09e16f317992 name=/runtime.v1.RuntimeService/StartContainer sandboxID=98ea141582ade312cd93de0b0c82496dfcee38be5ee122ad0fb32e95e03617b9
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.731541528Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-79b8c/POD" id=046b1fe8-a2fb-49d0-a345-2649f15d742a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.731611928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.741717726Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=046b1fe8-a2fb-49d0-a345-2649f15d742a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.768472789Z" level=info msg="Ran pod sandbox cd8239a68ceff1a31a2ccf294a8ce8e48bd044d149cce0bd7ebb0b1edbcac3b8 with infra container: kube-system/kube-proxy-79b8c/POD" id=046b1fe8-a2fb-49d0-a345-2649f15d742a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.769941315Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e1d771c0-39e7-4e68-a8fc-8229d58e2ee9 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.77430297Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1bfdac95-144f-49b5-bb15-5bffaca3ac1b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.782611213Z" level=info msg="Creating container: kube-system/kube-proxy-79b8c/kube-proxy" id=f41db880-c4f9-410a-a133-63c1759c620a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.782740174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.80046739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.806336513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:16 newest-cni-374679 crio[611]: time="2025-10-25T10:59:16.171642351Z" level=info msg="Created container 8843eb30915a738f33777e68d8a34d76afae84adaf15a489d5e94aee3f640a5d: kube-system/kube-proxy-79b8c/kube-proxy" id=f41db880-c4f9-410a-a133-63c1759c620a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:59:16 newest-cni-374679 crio[611]: time="2025-10-25T10:59:16.186208627Z" level=info msg="Starting container: 8843eb30915a738f33777e68d8a34d76afae84adaf15a489d5e94aee3f640a5d" id=7e369a84-d816-4362-9325-13a2d2752e39 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:59:16 newest-cni-374679 crio[611]: time="2025-10-25T10:59:16.195549552Z" level=info msg="Started container" PID=1074 containerID=8843eb30915a738f33777e68d8a34d76afae84adaf15a489d5e94aee3f640a5d description=kube-system/kube-proxy-79b8c/kube-proxy id=7e369a84-d816-4362-9325-13a2d2752e39 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd8239a68ceff1a31a2ccf294a8ce8e48bd044d149cce0bd7ebb0b1edbcac3b8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8843eb30915a7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   cd8239a68ceff       kube-proxy-79b8c                            kube-system
	8be40753d7089       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   98ea141582ade       kindnet-qtb6l                               kube-system
	8dd99f23a5130       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            1                   fbce3391562a1       kube-scheduler-newest-cni-374679            kube-system
	ead41b389f560       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   1                   a1f367375dadb       kube-controller-manager-newest-cni-374679   kube-system
	4fff872c680ae       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            1                   4034fd6c110bb       kube-apiserver-newest-cni-374679            kube-system
	6385714248ed5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      1                   4661bd55330aa       etcd-newest-cni-374679                      kube-system
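
The "container status" table above is the runtime's answer to the CRI ListContainers call. Below is a sketch of issuing that call against CRI-O's gRPC endpoint directly; the /var/run/crio/crio.sock path is the CRI-O default and an assumption here:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// Default CRI-O socket path (assumption; see crio.conf if relocated).
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Mirrors the CONTAINER / NAME / STATE columns in the table above.
			fmt.Printf("%.13s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
		}
	}
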
	
	
	==> describe nodes <==
	Name:               newest-cni-374679
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-374679
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=newest-cni-374679
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_58_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:58:43 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-374679
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:59:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:59:14 +0000   Sat, 25 Oct 2025 10:58:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:59:14 +0000   Sat, 25 Oct 2025 10:58:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:59:14 +0000   Sat, 25 Oct 2025 10:58:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 10:59:14 +0000   Sat, 25 Oct 2025 10:58:38 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-374679
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                913dee82-c4de-49b4-9575-60baba442e3d
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-374679                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kindnet-qtb6l                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-374679             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-374679    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-79b8c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-374679             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node newest-cni-374679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node newest-cni-374679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node newest-cni-374679 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node newest-cni-374679 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-374679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node newest-cni-374679 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           31s                node-controller  Node newest-cni-374679 event: Registered Node newest-cni-374679 in Controller
	  Normal   Starting                 17s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16s (x8 over 17s)  kubelet          Node newest-cni-374679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 17s)  kubelet          Node newest-cni-374679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 17s)  kubelet          Node newest-cni-374679 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-374679 event: Registered Node newest-cni-374679 in Controller
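
The Ready=False condition above (reason KubeletNotReady, waiting on a CNI config in /etc/cni/net.d/) is what keeps the node.kubernetes.io/not-ready taint in place and leaves coredns and storage-provisioner Pending. A small client-go sketch that reads the same condition; the default kubeconfig path is an assumption:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes ~/.kube/config points at the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"newest-cni-374679", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				// Prints Ready=False reason=KubeletNotReady while the
				// CNI configuration is still missing.
				fmt.Printf("Ready=%s reason=%s\n", cond.Status, cond.Reason)
			}
		}
	}
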
	
	
	==> dmesg <==
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
	[Oct25 10:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:55] overlayfs: idmapped layers are currently not supported
	[Oct25 10:56] overlayfs: idmapped layers are currently not supported
	[ +41.501413] overlayfs: idmapped layers are currently not supported
	[Oct25 10:57] overlayfs: idmapped layers are currently not supported
	[Oct25 10:58] overlayfs: idmapped layers are currently not supported
	[Oct25 10:59] overlayfs: idmapped layers are currently not supported
	[  +1.429017] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6385714248ed5135738e4519a9a7ba1b7a7684bb2deddf78459d3ce4a2c36c29] <==
	{"level":"warn","ts":"2025-10-25T10:59:10.432105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.442036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.472677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.503959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.544198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.578942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.638476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.719812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.858451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.924181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.042353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.106424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.194792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.230857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.317729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.402617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.403130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.421395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.494458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.527666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.637403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.668384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.706478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.737268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.833820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38578","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:59:21 up  2:41,  0 user,  load average: 5.48, 3.89, 3.14
	Linux newest-cni-374679 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8be40753d70890608b2a489d9aa5f5bf7ab8c112142065d69f7262b75364d774] <==
	I1025 10:59:15.720432       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:59:15.720649       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:59:15.720747       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:59:15.720763       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:59:15.720773       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:59:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:59:15.947754       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:59:15.947853       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:59:15.947890       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:59:15.948352       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [4fff872c680ae750b7165d91452f79ef43d35a25038ab06b1ebec4e7bdd2f138] <==
	I1025 10:59:14.110798       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:59:14.110805       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:59:14.130955       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:59:14.147770       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:59:14.154968       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:59:14.155338       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:59:14.155351       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:59:14.155447       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:59:14.181356       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:59:14.181387       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:59:14.182359       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:59:14.218605       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:59:14.220025       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1025 10:59:14.345860       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:59:14.554514       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:59:16.315426       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:59:16.493783       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:59:16.570744       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:59:16.598315       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:59:16.829479       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.49.119"}
	I1025 10:59:16.881612       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.170.133"}
	I1025 10:59:19.191612       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:59:19.226327       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:59:19.264770       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:59:19.328737       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [ead41b389f560135dd1912a08ba529d0f7ff2d1d41c70eb5d5b61f81dd410d6d] <==
	I1025 10:59:18.892309       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:59:18.892321       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:59:18.898144       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:59:18.900273       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:59:18.900301       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:59:18.900377       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:59:18.903942       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 10:59:18.918831       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:59:18.953202       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:59:18.953220       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:59:18.953225       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:59:18.953230       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:59:18.918858       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:59:18.918873       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:59:18.918886       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:59:18.960323       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:59:18.962289       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:59:18.967234       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:59:18.987126       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:59:18.988285       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:59:18.989451       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:59:19.112377       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:59:19.112403       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:59:19.112410       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:59:19.156955       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
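
The "Waiting for caches to sync" / "Caches are synced" pairs above are client-go's shared-informer startup handshake: each controller blocks until its informers finish an initial LIST+WATCH before it starts reconciling. A minimal sketch of that pattern with a pod informer, reusing the default-kubeconfig assumption from the earlier sketch:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		podInformer := factory.Core().V1().Pods().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		// Blocks until the local cache reflects the initial LIST; this is
		// the moment a controller logs "Caches are synced".
		if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
			panic("caches never synced")
		}
		fmt.Println("caches are synced")
	}
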
	
	
	==> kube-proxy [8843eb30915a738f33777e68d8a34d76afae84adaf15a489d5e94aee3f640a5d] <==
	I1025 10:59:16.388133       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:59:16.759474       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:59:16.876484       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:59:16.878158       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:59:16.878286       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:59:16.916439       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:59:16.916553       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:59:16.931484       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:59:16.931770       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:59:16.931784       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:59:16.937737       1 config.go:200] "Starting service config controller"
	I1025 10:59:16.937752       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:59:16.937770       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:59:16.937773       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:59:16.937781       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:59:16.937785       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:59:16.941469       1 config.go:309] "Starting node config controller"
	I1025 10:59:16.941616       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:59:16.941652       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:59:17.041486       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:59:17.041531       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:59:17.041544       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8dd99f23a5130e7f746756316786e7365b2eac6f3b2500b3498d864236737f92] <==
	I1025 10:59:10.028836       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:59:13.628498       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:59:13.628616       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:59:13.628649       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:59:13.628695       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:59:13.959980       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:59:13.960011       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:59:13.968178       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:59:13.968275       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:59:13.968293       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:59:13.968313       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 10:59:14.141085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:59:14.141261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:59:14.141351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:59:14.144865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:59:14.145018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:59:14.190756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1025 10:59:14.270380       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.054439     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4aad81e0-ec4e-4952-812a-459e61c41122-lib-modules\") pod \"kindnet-qtb6l\" (UID: \"4aad81e0-ec4e-4952-812a-459e61c41122\") " pod="kube-system/kindnet-qtb6l"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.054496     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a627fd5d-c73d-44de-9703-44d8ec7f157c-xtables-lock\") pod \"kube-proxy-79b8c\" (UID: \"a627fd5d-c73d-44de-9703-44d8ec7f157c\") " pod="kube-system/kube-proxy-79b8c"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.054517     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a627fd5d-c73d-44de-9703-44d8ec7f157c-lib-modules\") pod \"kube-proxy-79b8c\" (UID: \"a627fd5d-c73d-44de-9703-44d8ec7f157c\") " pod="kube-system/kube-proxy-79b8c"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.054573     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4aad81e0-ec4e-4952-812a-459e61c41122-xtables-lock\") pod \"kindnet-qtb6l\" (UID: \"4aad81e0-ec4e-4952-812a-459e61c41122\") " pod="kube-system/kindnet-qtb6l"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.054608     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4aad81e0-ec4e-4952-812a-459e61c41122-cni-cfg\") pod \"kindnet-qtb6l\" (UID: \"4aad81e0-ec4e-4952-812a-459e61c41122\") " pod="kube-system/kindnet-qtb6l"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: E1025 10:59:14.130197     728 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-newest-cni-374679\" is forbidden: User \"system:node:newest-cni-374679\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-374679' and this object" podUID="fef477a738d804b4c1ff12466b8a71c9" pod="kube-system/kube-controller-manager-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.322489     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.322605     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.322634     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.325021     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: E1025 10:59:14.381532     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-374679\" already exists" pod="kube-system/kube-controller-manager-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.381566     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: E1025 10:59:14.466265     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-374679\" already exists" pod="kube-system/kube-scheduler-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.466301     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: E1025 10:59:14.572929     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-374679\" already exists" pod="kube-system/etcd-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.572967     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: E1025 10:59:14.734550     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-374679\" already exists" pod="kube-system/kube-apiserver-newest-cni-374679"
	Oct 25 10:59:15 newest-cni-374679 kubelet[728]: E1025 10:59:15.056418     728 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 10:59:15 newest-cni-374679 kubelet[728]: E1025 10:59:15.099993     728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a627fd5d-c73d-44de-9703-44d8ec7f157c-kube-proxy podName:a627fd5d-c73d-44de-9703-44d8ec7f157c nodeName:}" failed. No retries permitted until 2025-10-25 10:59:15.599943977 +0000 UTC m=+10.995152390 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/a627fd5d-c73d-44de-9703-44d8ec7f157c-kube-proxy") pod "kube-proxy-79b8c" (UID: "a627fd5d-c73d-44de-9703-44d8ec7f157c") : failed to sync configmap cache: timed out waiting for the condition
	Oct 25 10:59:15 newest-cni-374679 kubelet[728]: I1025 10:59:15.186257     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:59:15 newest-cni-374679 kubelet[728]: W1025 10:59:15.442565     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/crio-98ea141582ade312cd93de0b0c82496dfcee38be5ee122ad0fb32e95e03617b9 WatchSource:0}: Error finding container 98ea141582ade312cd93de0b0c82496dfcee38be5ee122ad0fb32e95e03617b9: Status 404 returned error can't find the container with id 98ea141582ade312cd93de0b0c82496dfcee38be5ee122ad0fb32e95e03617b9
	Oct 25 10:59:15 newest-cni-374679 kubelet[728]: W1025 10:59:15.752686     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/crio-cd8239a68ceff1a31a2ccf294a8ce8e48bd044d149cce0bd7ebb0b1edbcac3b8 WatchSource:0}: Error finding container cd8239a68ceff1a31a2ccf294a8ce8e48bd044d149cce0bd7ebb0b1edbcac3b8: Status 404 returned error can't find the container with id cd8239a68ceff1a31a2ccf294a8ce8e48bd044d149cce0bd7ebb0b1edbcac3b8
	Oct 25 10:59:18 newest-cni-374679 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:59:19 newest-cni-374679 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:59:19 newest-cni-374679 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-374679 -n newest-cni-374679
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-374679 -n newest-cni-374679: exit status 2 (367.085299ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-374679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-4d24l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-8cf2z kubernetes-dashboard-855c9754f9-lxfqz
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-374679 describe pod coredns-66bc5c9577-4d24l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-8cf2z kubernetes-dashboard-855c9754f9-lxfqz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-374679 describe pod coredns-66bc5c9577-4d24l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-8cf2z kubernetes-dashboard-855c9754f9-lxfqz: exit status 1 (92.560199ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-4d24l" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-8cf2z" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-lxfqz" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-374679 describe pod coredns-66bc5c9577-4d24l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-8cf2z kubernetes-dashboard-855c9754f9-lxfqz: exit status 1
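
The non-running-pod scan above relies on kubectl's server-side field selector (status.phase!=Running). The same filter expressed through client-go, again assuming the default kubeconfig path:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Equivalent to: kubectl get po -A --field-selector=status.phase!=Running
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
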
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-374679
helpers_test.go:243: (dbg) docker inspect newest-cni-374679:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d",
	        "Created": "2025-10-25T10:58:18.030527549Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 466908,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:58:57.24994384Z",
	            "FinishedAt": "2025-10-25T10:58:56.184333988Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/hostname",
	        "HostsPath": "/var/lib/docker/containers/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/hosts",
	        "LogPath": "/var/lib/docker/containers/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d-json.log",
	        "Name": "/newest-cni-374679",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-374679:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-374679",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d",
	                "LowerDir": "/var/lib/docker/overlay2/1178d1621d712233b729c8b344a40f947ea6d8f0eb289643c9931db7c67e1eb5-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1178d1621d712233b729c8b344a40f947ea6d8f0eb289643c9931db7c67e1eb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1178d1621d712233b729c8b344a40f947ea6d8f0eb289643c9931db7c67e1eb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1178d1621d712233b729c8b344a40f947ea6d8f0eb289643c9931db7c67e1eb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-374679",
	                "Source": "/var/lib/docker/volumes/newest-cni-374679/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-374679",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-374679",
	                "name.minikube.sigs.k8s.io": "newest-cni-374679",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "db94d63a5c2c2bbbd875be7ac9c0df3cc507c18ba5b8df549e4ce480965c2554",
	            "SandboxKey": "/var/run/docker/netns/db94d63a5c2c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-374679": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:d4:b2:2f:15:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58611ffe5362d6a9d68586194cffae78efe127e4ab53288bcccd59ddf919e4bd",
	                    "EndpointID": "d3139fa5030c00ed625f15626e0d45265a00c113db4a2568f279cff03dee5ead",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-374679",
	                        "132f6b53f321"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
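The NetworkSettings.Ports map in the inspect output above is how minikube finds the SSH endpoint for this container: the cli_runner lines in the logs further down apply a Go template over it to pull the host port bound to 22/tcp. A small sketch of that extraction, using the same template expression ("example-container" is a placeholder name):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go-template expression minikube's cli_runner uses below to
		// resolve the host port mapped to the container's SSH port.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"example-container").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("host port for 22/tcp:", strings.TrimSpace(string(out)))
	}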
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-374679 -n newest-cni-374679
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-374679 -n newest-cni-374679: exit status 2 (390.324555ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-374679 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-374679 logs -n 25: (1.46350544s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-348342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │                     │
	│ stop    │ -p embed-certs-348342 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-348342 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:56 UTC │
	│ start   │ -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:56 UTC │ 25 Oct 25 10:57 UTC │
	│ image   │ default-k8s-diff-port-223394 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p default-k8s-diff-port-223394 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p disable-driver-mounts-487220                                                                                                                                                                                                               │ disable-driver-mounts-487220 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ start   │ -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:58 UTC │
	│ image   │ embed-certs-348342 image list --format=json                                                                                                                                                                                                   │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p embed-certs-348342 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p embed-certs-348342                                                                                                                                                                                                                         │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ delete  │ -p embed-certs-348342                                                                                                                                                                                                                         │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-093313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	│ stop    │ -p no-preload-093313 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-374679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	│ stop    │ -p newest-cni-374679 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ addons  │ enable dashboard -p newest-cni-374679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:59 UTC │
	│ addons  │ enable dashboard -p no-preload-093313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	│ image   │ newest-cni-374679 image list --format=json                                                                                                                                                                                                    │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │ 25 Oct 25 10:59 UTC │
	│ pause   │ -p newest-cni-374679 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:58:58
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:58:58.146917  467402 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:58:58.147129  467402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:58:58.147155  467402 out.go:374] Setting ErrFile to fd 2...
	I1025 10:58:58.147176  467402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:58:58.147467  467402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:58:58.147901  467402 out.go:368] Setting JSON to false
	I1025 10:58:58.148805  467402 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9690,"bootTime":1761380249,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:58:58.148901  467402 start.go:141] virtualization:  
	I1025 10:58:58.151598  467402 out.go:179] * [no-preload-093313] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:58:58.155312  467402 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:58:58.155402  467402 notify.go:220] Checking for updates...
	I1025 10:58:58.161340  467402 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:58:58.164175  467402 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:58:58.167166  467402 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:58:58.170146  467402 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:58:58.173010  467402 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:58:58.176306  467402 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:58:58.176900  467402 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:58:58.208577  467402 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:58:58.208703  467402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:58:58.268659  467402 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-25 10:58:58.259943051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:58:58.268761  467402 docker.go:318] overlay module found
	I1025 10:58:58.271851  467402 out.go:179] * Using the docker driver based on existing profile
	I1025 10:58:58.274711  467402 start.go:305] selected driver: docker
	I1025 10:58:58.274732  467402 start.go:925] validating driver "docker" against &{Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:58:58.274828  467402 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:58:58.275539  467402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:58:58.331815  467402 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-25 10:58:58.322732407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:58:58.332164  467402 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:58:58.332197  467402 cni.go:84] Creating CNI manager for ""
	I1025 10:58:58.332261  467402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:58:58.332302  467402 start.go:349] cluster config:
	{Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:58:58.337401  467402 out.go:179] * Starting "no-preload-093313" primary control-plane node in "no-preload-093313" cluster
	I1025 10:58:58.340201  467402 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:58:58.343136  467402 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:58:58.346112  467402 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:58:58.346213  467402 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:58:58.346594  467402 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/config.json ...
	I1025 10:58:58.346897  467402 cache.go:107] acquiring lock: {Name:mke50a780b6f2fd20bf0f3807e5c55f2165bbc2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.346979  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 10:58:58.346988  467402 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.695µs
	I1025 10:58:58.347001  467402 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 10:58:58.347012  467402 cache.go:107] acquiring lock: {Name:mk6e894f2fc5a822328f2889957353638b611d87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347044  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 10:58:58.347049  467402 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 38.261µs
	I1025 10:58:58.347055  467402 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 10:58:58.347079  467402 cache.go:107] acquiring lock: {Name:mk31460a278f5ce669dba0a3edc67dec38888d3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347111  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 10:58:58.347116  467402 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 38.572µs
	I1025 10:58:58.347121  467402 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 10:58:58.347130  467402 cache.go:107] acquiring lock: {Name:mk4eab06b911708d94fc84824aa5eaf12c5f728f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347155  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 10:58:58.347160  467402 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 31.114µs
	I1025 10:58:58.347165  467402 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 10:58:58.347174  467402 cache.go:107] acquiring lock: {Name:mk0fabb771ebb58b343ccbfcf727bcc4ba36d3bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347205  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 10:58:58.347210  467402 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 36.849µs
	I1025 10:58:58.347216  467402 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 10:58:58.347226  467402 cache.go:107] acquiring lock: {Name:mk9b73d996269c05e36f39d743e660929113e3bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347256  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1025 10:58:58.347261  467402 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 36.89µs
	I1025 10:58:58.347267  467402 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 10:58:58.347279  467402 cache.go:107] acquiring lock: {Name:mka63f62ad185c4a0c57416430877cf896f4796b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347307  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 10:58:58.347316  467402 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 33.757µs
	I1025 10:58:58.347322  467402 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 10:58:58.347331  467402 cache.go:107] acquiring lock: {Name:mk3432d572d15dfd7f5ddfb6ca632d44b3f5c29a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.347356  467402 cache.go:115] /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 10:58:58.347360  467402 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 31.188µs
	I1025 10:58:58.347366  467402 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 10:58:58.347371  467402 cache.go:87] Successfully saved all images to host disk.
	I1025 10:58:58.365050  467402 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:58:58.365069  467402 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:58:58.365083  467402 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:58:58.365115  467402 start.go:360] acquireMachinesLock for no-preload-093313: {Name:mk08df2ba22812bd327cf8f3a536e0d3054c6132 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:58:58.365170  467402 start.go:364] duration metric: took 38.606µs to acquireMachinesLock for "no-preload-093313"
	I1025 10:58:58.365190  467402 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:58:58.365195  467402 fix.go:54] fixHost starting: 
	I1025 10:58:58.365458  467402 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:58:58.382727  467402 fix.go:112] recreateIfNeeded on no-preload-093313: state=Stopped err=<nil>
	W1025 10:58:58.382764  467402 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:58:57.217805  466783 out.go:252] * Restarting existing docker container for "newest-cni-374679" ...
	I1025 10:58:57.217915  466783 cli_runner.go:164] Run: docker start newest-cni-374679
	I1025 10:58:57.473037  466783 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:58:57.508524  466783 kic.go:430] container "newest-cni-374679" state is running.
	I1025 10:58:57.508920  466783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-374679
	I1025 10:58:57.536177  466783 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/config.json ...
	I1025 10:58:57.536407  466783 machine.go:93] provisionDockerMachine start ...
	I1025 10:58:57.536468  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:58:57.561366  466783 main.go:141] libmachine: Using SSH client type: native
	I1025 10:58:57.561685  466783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1025 10:58:57.561695  466783 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:58:57.563209  466783 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 10:59:00.713743  466783 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-374679
	
	I1025 10:59:00.713766  466783 ubuntu.go:182] provisioning hostname "newest-cni-374679"
	I1025 10:59:00.713841  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:00.731426  466783 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:00.731749  466783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1025 10:59:00.731767  466783 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-374679 && echo "newest-cni-374679" | sudo tee /etc/hostname
	I1025 10:59:00.887202  466783 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-374679
	
	I1025 10:59:00.887274  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:00.904700  466783 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:00.905030  466783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1025 10:59:00.905053  466783 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-374679' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-374679/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-374679' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:59:01.054552  466783 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:59:01.054582  466783 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:59:01.054611  466783 ubuntu.go:190] setting up certificates
	I1025 10:59:01.054620  466783 provision.go:84] configureAuth start
	I1025 10:59:01.054693  466783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-374679
	I1025 10:59:01.073064  466783 provision.go:143] copyHostCerts
	I1025 10:59:01.073134  466783 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:59:01.073148  466783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:59:01.073229  466783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:59:01.073339  466783 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:59:01.073349  466783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:59:01.073378  466783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:59:01.073491  466783 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:59:01.073503  466783 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:59:01.073529  466783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:59:01.073595  466783 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.newest-cni-374679 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-374679]
	I1025 10:59:01.651167  466783 provision.go:177] copyRemoteCerts
	I1025 10:59:01.651248  466783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:59:01.651290  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:01.671864  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:01.782691  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:59:01.809247  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:59:01.834115  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:59:01.854945  466783 provision.go:87] duration metric: took 800.30212ms to configureAuth
	I1025 10:59:01.854976  466783 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:59:01.855174  466783 config.go:182] Loaded profile config "newest-cni-374679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:59:01.855296  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:01.873102  466783 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:01.873416  466783 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1025 10:59:01.873440  466783 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:58:58.386115  467402 out.go:252] * Restarting existing docker container for "no-preload-093313" ...
	I1025 10:58:58.386206  467402 cli_runner.go:164] Run: docker start no-preload-093313
	I1025 10:58:58.649901  467402 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:58:58.671381  467402 kic.go:430] container "no-preload-093313" state is running.
	I1025 10:58:58.671765  467402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-093313
	I1025 10:58:58.697667  467402 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/config.json ...
	I1025 10:58:58.697898  467402 machine.go:93] provisionDockerMachine start ...
	I1025 10:58:58.697965  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:58:58.719794  467402 main.go:141] libmachine: Using SSH client type: native
	I1025 10:58:58.720103  467402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1025 10:58:58.720112  467402 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:58:58.720977  467402 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 10:59:01.895260  467402 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-093313
	
	I1025 10:59:01.895287  467402 ubuntu.go:182] provisioning hostname "no-preload-093313"
	I1025 10:59:01.895350  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:01.916836  467402 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:01.917152  467402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1025 10:59:01.917175  467402 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-093313 && echo "no-preload-093313" | sudo tee /etc/hostname
	I1025 10:59:02.094810  467402 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-093313
	
	I1025 10:59:02.094966  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:02.120787  467402 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:02.121124  467402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1025 10:59:02.121148  467402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-093313' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-093313/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-093313' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:59:02.290339  467402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:59:02.290391  467402 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:59:02.290413  467402 ubuntu.go:190] setting up certificates
	I1025 10:59:02.290423  467402 provision.go:84] configureAuth start
	I1025 10:59:02.290500  467402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-093313
	I1025 10:59:02.313480  467402 provision.go:143] copyHostCerts
	I1025 10:59:02.313554  467402 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:59:02.313585  467402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:59:02.313656  467402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:59:02.313766  467402 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:59:02.313777  467402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:59:02.313800  467402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:59:02.313863  467402 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:59:02.313874  467402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:59:02.313895  467402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:59:02.314027  467402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.no-preload-093313 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-093313]
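The server certificate above is issued against minikube's own CA with the SAN list [127.0.0.1 192.168.85.2 localhost minikube no-preload-093313]. A rough openssl equivalent (bash), assuming placeholder ca.pem/ca-key.pem files rather than minikube's actual issuing code:

	# Sketch: issue a server cert with the same SAN list via openssl.
	# ca.pem and ca-key.pem are stand-ins for the CA files named above.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -out server.csr -subj "/O=jenkins.no-preload-093313"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	  -CAcreateserial -out server.pem -days 365 -extfile <(printf \
	  'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:no-preload-093313')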
	I1025 10:59:02.600083  467402 provision.go:177] copyRemoteCerts
	I1025 10:59:02.600217  467402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:59:02.600310  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:02.622490  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
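The Port:33453 in the client struct above comes from the docker inspect template two lines earlier, which digs the published host port for 22/tcp out of the container's port map; standalone:

	# Resolve the host port Docker published for the container's SSH port.
	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  no-preload-093313   # prints 33453 in this run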
	I1025 10:59:02.730539  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:59:02.750439  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:59:02.769214  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:59:02.789907  467402 provision.go:87] duration metric: took 499.461539ms to configureAuth
	I1025 10:59:02.789934  467402 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:59:02.790155  467402 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:59:02.790269  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:02.809327  467402 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:02.809652  467402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1025 10:59:02.809676  467402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:59:02.221351  466783 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:59:02.221375  466783 machine.go:96] duration metric: took 4.684958313s to provisionDockerMachine
	I1025 10:59:02.221386  466783 start.go:293] postStartSetup for "newest-cni-374679" (driver="docker")
	I1025 10:59:02.221397  466783 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:59:02.221481  466783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:59:02.221535  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:02.242025  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:02.359700  466783 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:59:02.363946  466783 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:59:02.363975  466783 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:59:02.363986  466783 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:59:02.364048  466783 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:59:02.364141  466783 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:59:02.364245  466783 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:59:02.373182  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:59:02.391066  466783 start.go:296] duration metric: took 169.664409ms for postStartSetup
	I1025 10:59:02.391157  466783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:59:02.391197  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:02.415658  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:02.523539  466783 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
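The two df probes differ only in units: the first reads column 5 of the /var line (used space as a percentage), the second reads column 4 under -BG (free space in whole gigabytes), presumably feeding minikube's disk-usage checks:

	df -h /var  | awk 'NR==2{print $5}'   # e.g. "23%" (used)
	df -BG /var | awk 'NR==2{print $4}'   # e.g. "40G" (available)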
	I1025 10:59:02.529500  466783 fix.go:56] duration metric: took 5.335644302s for fixHost
	I1025 10:59:02.529522  466783 start.go:83] releasing machines lock for "newest-cni-374679", held for 5.335691695s
	I1025 10:59:02.529589  466783 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-374679
	I1025 10:59:02.552943  466783 ssh_runner.go:195] Run: cat /version.json
	I1025 10:59:02.552996  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:02.553272  466783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:59:02.553319  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:02.575523  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:02.586707  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:02.698809  466783 ssh_runner.go:195] Run: systemctl --version
	I1025 10:59:02.789195  466783 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:59:02.840805  466783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:59:02.846988  466783 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:59:02.847051  466783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:59:02.856863  466783 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:59:02.856891  466783 start.go:495] detecting cgroup driver to use...
	I1025 10:59:02.856924  466783 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:59:02.856983  466783 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:59:02.884213  466783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:59:02.902607  466783 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:59:02.902700  466783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:59:02.921828  466783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:59:02.936311  466783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:59:03.073680  466783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:59:03.242011  466783 docker.go:234] disabling docker service ...
	I1025 10:59:03.242078  466783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:59:03.257649  466783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:59:03.272894  466783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:59:03.420440  466783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:59:03.558798  466783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:59:03.574511  466783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:59:03.590902  466783 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:59:03.590975  466783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.600754  466783 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:59:03.600820  466783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.610621  466783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.621366  466783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.631980  466783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:59:03.640971  466783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.651147  466783 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.661201  466783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:03.670847  466783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:59:03.679701  466783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:59:03.689569  466783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:03.818812  466783 ssh_runner.go:195] Run: sudo systemctl restart crio
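Taken together, the sed edits above should leave the drop-in looking roughly like this (a reconstruction from the commands, not a capture from this run):

	# Assumed shape of /etc/crio/crio.conf.d/02-crio.conf after the edits:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' \
	  /etc/crio/crio.conf.d/02-crio.conf   # verify the drop-in after the restart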
	I1025 10:59:03.977196  466783 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:59:03.977317  466783 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:59:03.986458  466783 start.go:563] Will wait 60s for crictl version
	I1025 10:59:03.986539  466783 ssh_runner.go:195] Run: which crictl
	I1025 10:59:03.991909  466783 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:59:04.024356  466783 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:59:04.024457  466783 ssh_runner.go:195] Run: crio --version
	I1025 10:59:04.069332  466783 ssh_runner.go:195] Run: crio --version
	I1025 10:59:04.123204  466783 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:59:04.126084  466783 cli_runner.go:164] Run: docker network inspect newest-cni-374679 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:59:04.154911  466783 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:59:04.159342  466783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
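The grep-and-copy pattern above, rather than sed -i, is deliberate: inside a Docker container /etc/hosts is a bind mount, so its inode cannot be replaced (which is what sed -i and mv do), while cp rewrites the contents in place. The shape of the trick in isolation:

	# Rewrite a bind-mounted /etc/hosts without swapping its inode.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.76.1\thost.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$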
	I1025 10:59:04.172111  466783 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 10:59:03.188045  467402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:59:03.188070  467402 machine.go:96] duration metric: took 4.490162614s to provisionDockerMachine
	I1025 10:59:03.188082  467402 start.go:293] postStartSetup for "no-preload-093313" (driver="docker")
	I1025 10:59:03.188156  467402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:59:03.188225  467402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:59:03.188275  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:03.215558  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:03.326637  467402 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:59:03.330668  467402 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:59:03.330696  467402 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:59:03.330711  467402 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:59:03.330765  467402 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:59:03.330847  467402 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:59:03.330956  467402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:59:03.343193  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:59:03.369684  467402 start.go:296] duration metric: took 181.585985ms for postStartSetup
	I1025 10:59:03.369849  467402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:59:03.369923  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:03.391648  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:03.495509  467402 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:59:03.501266  467402 fix.go:56] duration metric: took 5.136062649s for fixHost
	I1025 10:59:03.501290  467402 start.go:83] releasing machines lock for "no-preload-093313", held for 5.136110772s
	I1025 10:59:03.501357  467402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-093313
	I1025 10:59:03.518695  467402 ssh_runner.go:195] Run: cat /version.json
	I1025 10:59:03.518744  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:03.518970  467402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:59:03.519168  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:03.551857  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:03.555971  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:03.771951  467402 ssh_runner.go:195] Run: systemctl --version
	I1025 10:59:03.778607  467402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:59:03.826385  467402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:59:03.834988  467402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:59:03.835129  467402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:59:03.843449  467402 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:59:03.843548  467402 start.go:495] detecting cgroup driver to use...
	I1025 10:59:03.843637  467402 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:59:03.843726  467402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:59:03.861638  467402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:59:03.880375  467402 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:59:03.880488  467402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:59:03.897373  467402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:59:03.912411  467402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:59:04.067255  467402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:59:04.215001  467402 docker.go:234] disabling docker service ...
	I1025 10:59:04.215063  467402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:59:04.235976  467402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:59:04.251449  467402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:59:04.418591  467402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:59:04.604423  467402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:59:04.629907  467402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:59:04.647188  467402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:59:04.647242  467402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.661522  467402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:59:04.661590  467402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.676065  467402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.693727  467402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.705745  467402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:59:04.714700  467402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.730845  467402 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.742705  467402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:04.754664  467402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:59:04.763699  467402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:59:04.772638  467402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:04.958881  467402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:59:05.132513  467402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:59:05.132578  467402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:59:05.136808  467402 start.go:563] Will wait 60s for crictl version
	I1025 10:59:05.136870  467402 ssh_runner.go:195] Run: which crictl
	I1025 10:59:05.140760  467402 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:59:05.180917  467402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:59:05.181082  467402 ssh_runner.go:195] Run: crio --version
	I1025 10:59:05.217754  467402 ssh_runner.go:195] Run: crio --version
	I1025 10:59:05.253608  467402 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:59:04.175065  466783 kubeadm.go:883] updating cluster {Name:newest-cni-374679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:59:04.175214  466783 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:59:04.175292  466783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:59:04.228866  466783 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:59:04.228893  466783 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:59:04.228950  466783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:59:04.264584  466783 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:59:04.264604  466783 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:59:04.264611  466783 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:59:04.264718  466783 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-374679 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
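The bare ExecStart= line in the unit above is the usual systemd idiom: an empty assignment clears the ExecStart list inherited from the packaged kubelet.service, so the drop-in's full command line replaces it instead of appending. To confirm what actually took effect:

	systemctl cat kubelet                  # base unit plus 10-kubeadm.conf drop-in
	systemctl show -p ExecStart kubelet    # the effective, post-reset command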
	I1025 10:59:04.264800  466783 ssh_runner.go:195] Run: crio config
	I1025 10:59:04.344713  466783 cni.go:84] Creating CNI manager for ""
	I1025 10:59:04.344783  466783 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:59:04.344822  466783 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 10:59:04.344872  466783 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-374679 NodeName:newest-cni-374679 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:59:04.345065  466783 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-374679"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
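Note the extraArgs shape in the rendered config: kubeadm.k8s.io/v1beta4 takes each flag as a name/value list entry, where v1beta3 used a plain string map. If the file looks suspect, a sufficiently recent kubeadm can lint it before use (assuming the binary ships the "config validate" subcommand, as current releases do):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new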
	I1025 10:59:04.345159  466783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:59:04.359606  466783 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:59:04.359714  466783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:59:04.369008  466783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:59:04.385590  466783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:59:04.401580  466783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1025 10:59:04.420122  466783 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:59:04.424798  466783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:59:04.435492  466783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:04.570914  466783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:59:04.588135  466783 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679 for IP: 192.168.76.2
	I1025 10:59:04.588199  466783 certs.go:195] generating shared ca certs ...
	I1025 10:59:04.588231  466783 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:04.588399  466783 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:59:04.588478  466783 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:59:04.588518  466783 certs.go:257] generating profile certs ...
	I1025 10:59:04.588684  466783 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/client.key
	I1025 10:59:04.588776  466783 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key.de28dca6
	I1025 10:59:04.588863  466783 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.key
	I1025 10:59:04.589013  466783 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:59:04.589075  466783 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:59:04.589101  466783 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:59:04.589159  466783 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:59:04.589206  466783 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:59:04.589264  466783 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:59:04.589344  466783 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:59:04.590019  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:59:04.617425  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:59:04.647176  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:59:04.672281  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:59:04.704387  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:59:04.775091  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:59:04.810134  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:59:04.846113  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/newest-cni-374679/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:59:04.885892  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:59:04.929029  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:59:04.955459  466783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:59:04.980305  466783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:59:04.993891  466783 ssh_runner.go:195] Run: openssl version
	I1025 10:59:05.000999  466783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:59:05.017005  466783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:05.021401  466783 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:05.021558  466783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:05.070057  466783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:59:05.078712  466783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:59:05.088030  466783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:59:05.092790  466783 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:59:05.092909  466783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:59:05.145679  466783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:59:05.159030  466783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:59:05.169527  466783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:59:05.173732  466783 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:59:05.173798  466783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:59:05.217765  466783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
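The b5213941.0, 51391683.0 and 3ec20f2e.0 link names above are OpenSSL subject-hash names: openssl x509 -hash prints the eight-hex-digit hash that libssl uses to look up a CA under /etc/ssl/certs, and the .0 suffix disambiguates collisions. Reproducing one by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h.0"                    # b5213941.0 for this CA, per the log above
	ls -l "/etc/ssl/certs/$h.0"    # should resolve back to minikubeCA.pem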
	I1025 10:59:05.229670  466783 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:59:05.234495  466783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:59:05.283721  466783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:59:05.326579  466783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:59:05.398535  466783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:59:05.501606  466783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:59:05.594422  466783 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
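Each -checkend 86400 probe above asks whether the certificate is still valid 86400 seconds (24 hours) from now; a nonzero exit is what pushes minikube to regenerate control-plane certs. In isolation:

	# Exit 0 = valid for at least another day; exit 1 = expiring or expired.
	sudo openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt \
	  -checkend 86400 && echo 'ok for >=24h' || echo 'expires within 24h'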
	I1025 10:59:05.687918  466783 kubeadm.go:400] StartCluster: {Name:newest-cni-374679 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-374679 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:59:05.688025  466783 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:59:05.688109  466783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:59:05.827573  466783 cri.go:89] found id: "8dd99f23a5130e7f746756316786e7365b2eac6f3b2500b3498d864236737f92"
	I1025 10:59:05.827593  466783 cri.go:89] found id: "ead41b389f560135dd1912a08ba529d0f7ff2d1d41c70eb5d5b61f81dd410d6d"
	I1025 10:59:05.827601  466783 cri.go:89] found id: "4fff872c680ae750b7165d91452f79ef43d35a25038ab06b1ebec4e7bdd2f138"
	I1025 10:59:05.827606  466783 cri.go:89] found id: "6385714248ed5135738e4519a9a7ba1b7a7684bb2deddf78459d3ce4a2c36c29"
	I1025 10:59:05.827609  466783 cri.go:89] found id: ""
	I1025 10:59:05.827674  466783 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:59:05.855776  466783 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:59:05Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:59:05.855861  466783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:59:05.877433  466783 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:59:05.877449  466783 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:59:05.877501  466783 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:59:05.892914  466783 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:59:05.893401  466783 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-374679" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:59:05.893569  466783 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-259409/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-374679" cluster setting kubeconfig missing "newest-cni-374679" context setting]
	I1025 10:59:05.893952  466783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:05.895736  466783 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:59:05.915914  466783 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 10:59:05.915949  466783 kubeadm.go:601] duration metric: took 38.494212ms to restartPrimaryControlPlane
	I1025 10:59:05.915964  466783 kubeadm.go:402] duration metric: took 228.053061ms to StartCluster
	I1025 10:59:05.915993  466783 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:05.916057  466783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:59:05.916712  466783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:05.916919  466783 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:59:05.917289  466783 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:59:05.917368  466783 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-374679"
	I1025 10:59:05.917382  466783 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-374679"
	W1025 10:59:05.917393  466783 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:59:05.917415  466783 host.go:66] Checking if "newest-cni-374679" exists ...
	I1025 10:59:05.917873  466783 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:59:05.918315  466783 config.go:182] Loaded profile config "newest-cni-374679": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:59:05.918388  466783 addons.go:69] Setting dashboard=true in profile "newest-cni-374679"
	I1025 10:59:05.918401  466783 addons.go:238] Setting addon dashboard=true in "newest-cni-374679"
	W1025 10:59:05.918418  466783 addons.go:247] addon dashboard should already be in state true
	I1025 10:59:05.918448  466783 host.go:66] Checking if "newest-cni-374679" exists ...
	I1025 10:59:05.918891  466783 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:59:05.923189  466783 addons.go:69] Setting default-storageclass=true in profile "newest-cni-374679"
	I1025 10:59:05.923226  466783 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-374679"
	I1025 10:59:05.923272  466783 out.go:179] * Verifying Kubernetes components...
	I1025 10:59:05.923607  466783 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:59:05.932153  466783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:05.966162  466783 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:59:05.969292  466783 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:59:05.969314  466783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:59:05.969382  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:05.988412  466783 addons.go:238] Setting addon default-storageclass=true in "newest-cni-374679"
	W1025 10:59:05.988434  466783 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:59:05.988458  466783 host.go:66] Checking if "newest-cni-374679" exists ...
	I1025 10:59:05.988873  466783 cli_runner.go:164] Run: docker container inspect newest-cni-374679 --format={{.State.Status}}
	I1025 10:59:05.994889  466783 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:59:05.998279  466783 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:59:06.002072  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:59:06.002109  466783 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:59:06.002193  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:06.024422  466783 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:59:06.024445  466783 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:59:06.024516  466783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-374679
	I1025 10:59:06.073603  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:06.087460  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:06.095916  466783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/newest-cni-374679/id_rsa Username:docker}
	I1025 10:59:06.358783  466783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:59:06.415213  466783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:59:06.501678  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:59:06.501699  466783 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:59:06.553435  466783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:59:06.700672  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:59:06.700697  466783 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:59:06.750571  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:59:06.750600  466783 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:59:06.899299  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:59:06.899323  466783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:59:06.953467  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:59:06.953491  466783 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:59:05.256544  467402 cli_runner.go:164] Run: docker network inspect no-preload-093313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:59:05.275674  467402 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:59:05.279982  467402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:59:05.290597  467402 kubeadm.go:883] updating cluster {Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:59:05.290722  467402 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:59:05.290763  467402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:59:05.339756  467402 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:59:05.339832  467402 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:59:05.339855  467402 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 10:59:05.339995  467402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-093313 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:59:05.340124  467402 ssh_runner.go:195] Run: crio config
	I1025 10:59:05.424900  467402 cni.go:84] Creating CNI manager for ""
	I1025 10:59:05.424973  467402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:59:05.425012  467402 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:59:05.425073  467402 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-093313 NodeName:no-preload-093313 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:59:05.425254  467402 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-093313"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
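
	This is the full multi-document config minikube renders: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration passed through to the components. It is written to /var/tmp/minikube/kubeadm.yaml.new (2214 bytes) a few lines below. As a sketch, newer kubeadm releases can lint such a file before it is used; the kubeadm binary path here mirrors where minikube stages kubectl and is an assumption:

	# Sanity-check the generated config (assumes `kubeadm config validate` is available)
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new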
	
	I1025 10:59:05.425366  467402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:59:05.435170  467402 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:59:05.435293  467402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:59:05.444208  467402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:59:05.459318  467402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:59:05.474839  467402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 10:59:05.490424  467402 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:59:05.494551  467402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:59:05.505783  467402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:05.722927  467402 ssh_runner.go:195] Run: sudo systemctl start kubelet
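
With the unit files in place, minikube reloads systemd and starts kubelet before moving on to certificates. If this step were suspect, a quick check along these lines (a sketch, assuming `minikube ssh` access) would confirm the service state:

	# Confirm kubelet is active and peek at its recent log
	minikube -p no-preload-093313 ssh -- \
	  "sudo systemctl is-active kubelet && sudo journalctl -u kubelet -n 20 --no-pager"
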
	I1025 10:59:05.753156  467402 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313 for IP: 192.168.85.2
	I1025 10:59:05.753233  467402 certs.go:195] generating shared ca certs ...
	I1025 10:59:05.753283  467402 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:05.753560  467402 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:59:05.753658  467402 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:59:05.753696  467402 certs.go:257] generating profile certs ...
	I1025 10:59:05.753850  467402 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.key
	I1025 10:59:05.754012  467402 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key.bf0f12ad
	I1025 10:59:05.754114  467402 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.key
	I1025 10:59:05.754339  467402 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:59:05.754418  467402 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:59:05.754459  467402 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:59:05.754510  467402 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:59:05.754577  467402 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:59:05.754641  467402 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:59:05.754730  467402 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:59:05.755762  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:59:05.795751  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:59:05.836515  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:59:05.903658  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:59:05.971183  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:59:06.088297  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:59:06.151171  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:59:06.195948  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:59:06.269452  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:59:06.322502  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:59:06.354677  467402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:59:06.413161  467402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:59:06.443509  467402 ssh_runner.go:195] Run: openssl version
	I1025 10:59:06.459120  467402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:59:06.478275  467402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:59:06.485595  467402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:59:06.485692  467402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:59:06.558971  467402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:59:06.570113  467402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:59:06.583025  467402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:06.590729  467402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:06.590832  467402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:06.646944  467402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:59:06.657160  467402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:59:06.671319  467402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:59:06.676525  467402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:59:06.676668  467402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:59:06.751046  467402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
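
The three test/ls/hash/ln sequences above implement the OpenSSL CA-directory convention: each trusted certificate is linked into /etc/ssl/certs under `<subject-hash>.0`, where the hash is what `openssl x509 -hash` prints (b5213941 for minikubeCA.pem here). The same step by hand, for one certificate:

	# Recreate the hash symlink the way the log does it
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
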
	I1025 10:59:06.760032  467402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:59:06.766700  467402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:59:06.854657  467402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:59:06.968900  467402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:59:07.126629  467402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:59:07.278319  467402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:59:07.451115  467402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
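
Each `-checkend 86400` call asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would mark that cert for regeneration. The probe in isolation:

	# Exit 0 if the cert outlives the next 24h, non-zero if it expires sooner
	sudo openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo ok
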
	I1025 10:59:07.596056  467402 kubeadm.go:400] StartCluster: {Name:no-preload-093313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-093313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:59:07.596203  467402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:59:07.596293  467402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:59:07.737195  467402 cri.go:89] found id: "555a214631009b8c9e0ad146cf6605f03eec6b67635b74eb9d3950940eecf3f5"
	I1025 10:59:07.737295  467402 cri.go:89] found id: "3dd46cc93a4d340b21d6515927392c6d678062f1fd4a8eb33513a013a750df3f"
	I1025 10:59:07.737317  467402 cri.go:89] found id: "1abb0086bfd53a0a24fd6a972d03dfa536774e2a3214e984b2913d5d42eb1584"
	I1025 10:59:07.737342  467402 cri.go:89] found id: "2c3118fc8aba39e254ed98a90027a52eb3bc4eb55ca37aed37f0638d414d5a7c"
	I1025 10:59:07.737367  467402 cri.go:89] found id: ""
	I1025 10:59:07.737434  467402 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:59:07.775645  467402 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:59:07Z" level=error msg="open /run/runc: no such file or directory"
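
The warning is benign: `runc list` reads container state from its root directory (/run/runc by default when run as root), and the error just means runc has no state there, so nothing is paused; minikube logs it and continues into the restart path. The probe it ran, tolerating the failure:

	# Same probe; an empty or missing state root is not fatal
	sudo runc list -f json || true
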
	I1025 10:59:07.775794  467402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:59:07.797946  467402 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:59:07.798022  467402 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:59:07.798094  467402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:59:07.816600  467402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:59:07.817213  467402 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-093313" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:59:07.817498  467402 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-259409/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-093313" cluster setting kubeconfig missing "no-preload-093313" context setting]
	I1025 10:59:07.817960  467402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:07.819745  467402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:59:07.829523  467402 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:59:07.829592  467402 kubeadm.go:601] duration metric: took 31.549642ms to restartPrimaryControlPlane
	I1025 10:59:07.829617  467402 kubeadm.go:402] duration metric: took 233.56831ms to StartCluster
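
Because "no-preload-093313" was missing from the shared kubeconfig, minikube repairs the file under a write lock, then diffs the old and new kubeadm.yaml; with no drift, the control plane is not reconfigured and StartCluster finishes in about 234ms. As a rough sketch, the kubeconfig repair amounts to steps like the following (the paths and entry names are illustrative, not taken from minikube's implementation):

	# Hypothetical manual equivalent of the kubeconfig repair
	kubectl config set-cluster no-preload-093313 \
	  --server=https://192.168.85.2:8443 \
	  --certificate-authority="$HOME/.minikube/ca.crt"
	kubectl config set-context no-preload-093313 \
	  --cluster=no-preload-093313 --user=no-preload-093313
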
	I1025 10:59:07.829654  467402 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:07.829729  467402 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:59:07.830675  467402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:07.830929  467402 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:59:07.831336  467402 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:59:07.831365  467402 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:59:07.831702  467402 addons.go:69] Setting storage-provisioner=true in profile "no-preload-093313"
	I1025 10:59:07.831711  467402 addons.go:69] Setting dashboard=true in profile "no-preload-093313"
	I1025 10:59:07.831731  467402 addons.go:238] Setting addon storage-provisioner=true in "no-preload-093313"
	I1025 10:59:07.831765  467402 addons.go:69] Setting default-storageclass=true in profile "no-preload-093313"
	I1025 10:59:07.831756  467402 addons.go:238] Setting addon dashboard=true in "no-preload-093313"
	W1025 10:59:07.831849  467402 addons.go:247] addon dashboard should already be in state true
	I1025 10:59:07.831902  467402 host.go:66] Checking if "no-preload-093313" exists ...
	W1025 10:59:07.831768  467402 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:59:07.831993  467402 host.go:66] Checking if "no-preload-093313" exists ...
	I1025 10:59:07.832545  467402 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:59:07.832608  467402 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:59:07.831848  467402 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-093313"
	I1025 10:59:07.833324  467402 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:59:07.842270  467402 out.go:179] * Verifying Kubernetes components...
	I1025 10:59:07.845490  467402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:07.898047  467402 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:59:07.898133  467402 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:59:07.902159  467402 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:59:07.902194  467402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:59:07.902268  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:07.908276  467402 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:59:07.914059  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:59:07.914088  467402 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:59:07.914164  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:07.915392  467402 addons.go:238] Setting addon default-storageclass=true in "no-preload-093313"
	W1025 10:59:07.915409  467402 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:59:07.915433  467402 host.go:66] Checking if "no-preload-093313" exists ...
	I1025 10:59:07.915837  467402 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 10:59:07.957499  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:07.958412  467402 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:59:07.958434  467402 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:59:07.958496  467402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 10:59:07.986196  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:08.007780  467402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 10:59:07.015456  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:59:07.015490  466783 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:59:07.059599  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:59:07.059627  466783 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:59:07.086310  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:59:07.086355  466783 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:59:07.115567  466783 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:59:07.115592  466783 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:59:07.164490  466783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:59:08.334144  467402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:59:08.438431  467402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:59:08.462506  467402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:59:08.502450  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:59:08.502528  467402 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:59:08.736385  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:59:08.736463  467402 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:59:08.799105  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:59:08.799187  467402 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:59:08.847821  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:59:08.847901  467402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:59:08.926621  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:59:08.926714  467402 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:59:08.955393  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:59:08.955470  467402 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:59:08.975800  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:59:08.975879  467402 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:59:09.001926  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:59:09.002086  467402 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:59:09.063297  467402 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:59:09.063379  467402 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:59:09.105571  467402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
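
All ten dashboard manifests go in through a single kubectl apply against the node-local kubeconfig; the matching Completed: line below shows it took about 10.9s because the apiserver was still settling. To watch the addon converge afterwards (a sketch; the `kubernetes-dashboard` namespace and Deployment name are the addon's usual defaults, assumed here):

	# Wait for the dashboard Deployment created by the addon
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard \
	  rollout status deploy/kubernetes-dashboard --timeout=120s
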
	I1025 10:59:17.007909  466783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.649056426s)
	I1025 10:59:17.007988  466783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.592754036s)
	I1025 10:59:17.008315  466783 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.454854762s)
	I1025 10:59:17.008340  466783 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:59:17.008394  466783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:59:17.008508  466783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.843988694s)
	I1025 10:59:17.012013  466783 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-374679 addons enable metrics-server
	
	I1025 10:59:17.054252  466783 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1025 10:59:17.055941  466783 api_server.go:72] duration metric: took 11.138984373s to wait for apiserver process to appear ...
	I1025 10:59:17.055962  466783 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:59:17.055980  466783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:59:17.057448  466783 addons.go:514] duration metric: took 11.140142529s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 10:59:17.070034  466783 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:59:17.071192  466783 api_server.go:141] control plane version: v1.34.1
	I1025 10:59:17.071254  466783 api_server.go:131] duration metric: took 15.285249ms to wait for apiserver health ...
	I1025 10:59:17.071279  466783 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:59:17.079670  466783 system_pods.go:59] 8 kube-system pods found
	I1025 10:59:17.079756  466783 system_pods.go:61] "coredns-66bc5c9577-4d24l" [5674f0d2-53d4-4f02-b91b-0e79c61b0c79] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:59:17.079782  466783 system_pods.go:61] "etcd-newest-cni-374679" [1492f4ab-00e0-4666-93a7-5426af263e77] Running
	I1025 10:59:17.079821  466783 system_pods.go:61] "kindnet-qtb6l" [4aad81e0-ec4e-4952-812a-459e61c41122] Running
	I1025 10:59:17.079848  466783 system_pods.go:61] "kube-apiserver-newest-cni-374679" [a8e63617-a996-48d7-8bd5-1d27197e9522] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:59:17.079872  466783 system_pods.go:61] "kube-controller-manager-newest-cni-374679" [542d0345-a119-4e95-83a0-97a347312be9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:59:17.079900  466783 system_pods.go:61] "kube-proxy-79b8c" [a627fd5d-c73d-44de-9703-44d8ec7f157c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:59:17.079931  466783 system_pods.go:61] "kube-scheduler-newest-cni-374679" [041edb3d-07d6-4a74-b89a-37d705bcafd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:59:17.079960  466783 system_pods.go:61] "storage-provisioner" [f71da934-4c23-469c-b955-21feda9210a0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:59:17.079986  466783 system_pods.go:74] duration metric: took 8.685435ms to wait for pod list to return data ...
	I1025 10:59:17.080009  466783 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:59:17.086968  466783 default_sa.go:45] found service account: "default"
	I1025 10:59:17.087036  466783 default_sa.go:55] duration metric: took 6.997347ms for default service account to be created ...
	I1025 10:59:17.087064  466783 kubeadm.go:586] duration metric: took 11.170112136s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 10:59:17.087113  466783 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:59:17.090038  466783 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:59:17.090128  466783 node_conditions.go:123] node cpu capacity is 2
	I1025 10:59:17.090164  466783 node_conditions.go:105] duration metric: took 3.026136ms to run NodePressure ...
	I1025 10:59:17.090212  466783 start.go:241] waiting for startup goroutines ...
	I1025 10:59:17.090246  466783 start.go:246] waiting for cluster config update ...
	I1025 10:59:17.090299  466783 start.go:255] writing updated cluster config ...
	I1025 10:59:17.090686  466783 ssh_runner.go:195] Run: rm -f paused
	I1025 10:59:17.205579  466783 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:59:17.209055  466783 out.go:179] * Done! kubectl is now configured to use "newest-cni-374679" cluster and "default" namespace by default
	I1025 10:59:19.970663  467402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.636487093s)
	I1025 10:59:19.970728  467402 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (11.53227663s)
	I1025 10:59:19.970754  467402 node_ready.go:35] waiting up to 6m0s for node "no-preload-093313" to be "Ready" ...
	I1025 10:59:19.971081  467402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.5085004s)
	I1025 10:59:19.971347  467402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.865695731s)
	I1025 10:59:19.974388  467402 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-093313 addons enable metrics-server
	
	I1025 10:59:19.993975  467402 node_ready.go:49] node "no-preload-093313" is "Ready"
	I1025 10:59:19.994025  467402 node_ready.go:38] duration metric: took 23.251286ms for node "no-preload-093313" to be "Ready" ...
	I1025 10:59:19.994039  467402 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:59:19.994113  467402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:59:20.006675  467402 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1025 10:59:20.010560  467402 addons.go:514] duration metric: took 12.179167464s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 10:59:20.013322  467402 api_server.go:72] duration metric: took 12.182323727s to wait for apiserver process to appear ...
	I1025 10:59:20.013402  467402 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:59:20.013439  467402 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:59:20.024514  467402 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
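
The readiness wait hits the apiserver's /healthz endpoint directly rather than going through kubectl; a 200 response with body "ok" ends the wait. The same probe by hand:

	# -k because the cluster CA is typically not in the local trust store
	curl -k https://192.168.85.2:8443/healthz
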
	I1025 10:59:20.026120  467402 api_server.go:141] control plane version: v1.34.1
	I1025 10:59:20.026196  467402 api_server.go:131] duration metric: took 12.772086ms to wait for apiserver health ...
	I1025 10:59:20.026222  467402 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:59:20.031730  467402 system_pods.go:59] 8 kube-system pods found
	I1025 10:59:20.031817  467402 system_pods.go:61] "coredns-66bc5c9577-c56mp" [ee976d20-a036-4d38-ad57-a502bf3d0ff7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:59:20.031842  467402 system_pods.go:61] "etcd-no-preload-093313" [83fac023-8769-42f1-bb01-7b45b695a20f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:59:20.031874  467402 system_pods.go:61] "kindnet-6tbtt" [9b74e355-e50d-43f8-94b8-43fdbad27e8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:59:20.031911  467402 system_pods.go:61] "kube-apiserver-no-preload-093313" [5b7a2f41-bfcc-4460-bc30-242d59d2cfa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:59:20.031934  467402 system_pods.go:61] "kube-controller-manager-no-preload-093313" [890c70e0-54a4-4423-bbd0-245fbbae3273] Running
	I1025 10:59:20.031964  467402 system_pods.go:61] "kube-proxy-vlb79" [9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:59:20.031997  467402 system_pods.go:61] "kube-scheduler-no-preload-093313" [6376071f-6220-481e-b3ec-fed60fe4f008] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:59:20.032020  467402 system_pods.go:61] "storage-provisioner" [335dab10-1baa-4bca-afa1-0ccae3bddad5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:59:20.032043  467402 system_pods.go:74] duration metric: took 5.798813ms to wait for pod list to return data ...
	I1025 10:59:20.032074  467402 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:59:20.035734  467402 default_sa.go:45] found service account: "default"
	I1025 10:59:20.035817  467402 default_sa.go:55] duration metric: took 3.721806ms for default service account to be created ...
	I1025 10:59:20.035844  467402 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:59:20.042527  467402 system_pods.go:86] 8 kube-system pods found
	I1025 10:59:20.042618  467402 system_pods.go:89] "coredns-66bc5c9577-c56mp" [ee976d20-a036-4d38-ad57-a502bf3d0ff7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:59:20.042651  467402 system_pods.go:89] "etcd-no-preload-093313" [83fac023-8769-42f1-bb01-7b45b695a20f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:59:20.042681  467402 system_pods.go:89] "kindnet-6tbtt" [9b74e355-e50d-43f8-94b8-43fdbad27e8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 10:59:20.042712  467402 system_pods.go:89] "kube-apiserver-no-preload-093313" [5b7a2f41-bfcc-4460-bc30-242d59d2cfa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:59:20.042744  467402 system_pods.go:89] "kube-controller-manager-no-preload-093313" [890c70e0-54a4-4423-bbd0-245fbbae3273] Running
	I1025 10:59:20.042784  467402 system_pods.go:89] "kube-proxy-vlb79" [9d2476c6-edbd-4e9b-9d21-4a5547f3cdbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 10:59:20.042810  467402 system_pods.go:89] "kube-scheduler-no-preload-093313" [6376071f-6220-481e-b3ec-fed60fe4f008] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:59:20.042845  467402 system_pods.go:89] "storage-provisioner" [335dab10-1baa-4bca-afa1-0ccae3bddad5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:59:20.042872  467402 system_pods.go:126] duration metric: took 7.005437ms to wait for k8s-apps to be running ...
	I1025 10:59:20.042896  467402 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:59:20.042972  467402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:59:20.059587  467402 system_svc.go:56] duration metric: took 16.681479ms WaitForService to wait for kubelet
	I1025 10:59:20.059656  467402 kubeadm.go:586] duration metric: took 12.228663154s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:59:20.059692  467402 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:59:20.063163  467402 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:59:20.063262  467402 node_conditions.go:123] node cpu capacity is 2
	I1025 10:59:20.063290  467402 node_conditions.go:105] duration metric: took 3.577911ms to run NodePressure ...
	I1025 10:59:20.063315  467402 start.go:241] waiting for startup goroutines ...
	I1025 10:59:20.063341  467402 start.go:246] waiting for cluster config update ...
	I1025 10:59:20.063368  467402 start.go:255] writing updated cluster config ...
	I1025 10:59:20.063681  467402 ssh_runner.go:195] Run: rm -f paused
	I1025 10:59:20.068411  467402 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:59:20.072122  467402 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c56mp" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:59:22.081252  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
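
The trailing pod_ready wait polls the Ready condition of each labelled kube-system pod; coredns has just been restarted and is still not Ready when the log ends, which is exactly what pod_ready.go:104 reports. The condition being polled is readable with kubectl (run against a kubeconfig pointing at this cluster):

	# Prints "True" once the coredns pod passes its readiness probe
	kubectl -n kube-system get pod coredns-66bc5c9577-c56mp \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'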
	
	
	==> CRI-O <==
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.422468302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.427491272Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c516aa99-c727-41c2-a328-46fb04f9a989 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.445445665Z" level=info msg="Ran pod sandbox 98ea141582ade312cd93de0b0c82496dfcee38be5ee122ad0fb32e95e03617b9 with infra container: kube-system/kindnet-qtb6l/POD" id=c516aa99-c727-41c2-a328-46fb04f9a989 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.451537839Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f77a4d98-b4b5-49fb-bf26-9a5fd43ed0fe name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.475609227Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7f354199-6b87-42ba-b7de-1858cc704be1 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.476936221Z" level=info msg="Creating container: kube-system/kindnet-qtb6l/kindnet-cni" id=9571aee9-20fb-4de8-aa6f-7cc08f2893ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.477059398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.499676585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.50024537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.553700898Z" level=info msg="Created container 8be40753d70890608b2a489d9aa5f5bf7ab8c112142065d69f7262b75364d774: kube-system/kindnet-qtb6l/kindnet-cni" id=9571aee9-20fb-4de8-aa6f-7cc08f2893ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.554404191Z" level=info msg="Starting container: 8be40753d70890608b2a489d9aa5f5bf7ab8c112142065d69f7262b75364d774" id=36d55802-9685-47bc-b1ec-09e16f317992 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.561832918Z" level=info msg="Started container" PID=1053 containerID=8be40753d70890608b2a489d9aa5f5bf7ab8c112142065d69f7262b75364d774 description=kube-system/kindnet-qtb6l/kindnet-cni id=36d55802-9685-47bc-b1ec-09e16f317992 name=/runtime.v1.RuntimeService/StartContainer sandboxID=98ea141582ade312cd93de0b0c82496dfcee38be5ee122ad0fb32e95e03617b9
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.731541528Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-79b8c/POD" id=046b1fe8-a2fb-49d0-a345-2649f15d742a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.731611928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.741717726Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=046b1fe8-a2fb-49d0-a345-2649f15d742a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.768472789Z" level=info msg="Ran pod sandbox cd8239a68ceff1a31a2ccf294a8ce8e48bd044d149cce0bd7ebb0b1edbcac3b8 with infra container: kube-system/kube-proxy-79b8c/POD" id=046b1fe8-a2fb-49d0-a345-2649f15d742a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.769941315Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e1d771c0-39e7-4e68-a8fc-8229d58e2ee9 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.77430297Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1bfdac95-144f-49b5-bb15-5bffaca3ac1b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.782611213Z" level=info msg="Creating container: kube-system/kube-proxy-79b8c/kube-proxy" id=f41db880-c4f9-410a-a133-63c1759c620a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.782740174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.80046739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:15 newest-cni-374679 crio[611]: time="2025-10-25T10:59:15.806336513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:16 newest-cni-374679 crio[611]: time="2025-10-25T10:59:16.171642351Z" level=info msg="Created container 8843eb30915a738f33777e68d8a34d76afae84adaf15a489d5e94aee3f640a5d: kube-system/kube-proxy-79b8c/kube-proxy" id=f41db880-c4f9-410a-a133-63c1759c620a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:59:16 newest-cni-374679 crio[611]: time="2025-10-25T10:59:16.186208627Z" level=info msg="Starting container: 8843eb30915a738f33777e68d8a34d76afae84adaf15a489d5e94aee3f640a5d" id=7e369a84-d816-4362-9325-13a2d2752e39 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:59:16 newest-cni-374679 crio[611]: time="2025-10-25T10:59:16.195549552Z" level=info msg="Started container" PID=1074 containerID=8843eb30915a738f33777e68d8a34d76afae84adaf15a489d5e94aee3f640a5d description=kube-system/kube-proxy-79b8c/kube-proxy id=7e369a84-d816-4362-9325-13a2d2752e39 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd8239a68ceff1a31a2ccf294a8ce8e48bd044d149cce0bd7ebb0b1edbcac3b8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8843eb30915a7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   cd8239a68ceff       kube-proxy-79b8c                            kube-system
	8be40753d7089       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   8 seconds ago       Running             kindnet-cni               1                   98ea141582ade       kindnet-qtb6l                               kube-system
	8dd99f23a5130       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago      Running             kube-scheduler            1                   fbce3391562a1       kube-scheduler-newest-cni-374679            kube-system
	ead41b389f560       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago      Running             kube-controller-manager   1                   a1f367375dadb       kube-controller-manager-newest-cni-374679   kube-system
	4fff872c680ae       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago      Running             kube-apiserver            1                   4034fd6c110bb       kube-apiserver-newest-cni-374679            kube-system
	6385714248ed5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago      Running             etcd                      1                   4661bd55330aa       etcd-newest-cni-374679                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-374679
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-374679
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=newest-cni-374679
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_58_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:58:43 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-374679
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:59:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:59:14 +0000   Sat, 25 Oct 2025 10:58:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:59:14 +0000   Sat, 25 Oct 2025 10:58:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:59:14 +0000   Sat, 25 Oct 2025 10:58:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 10:59:14 +0000   Sat, 25 Oct 2025 10:58:38 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-374679
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                913dee82-c4de-49b4-9575-60baba442e3d
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-374679                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-qtb6l                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-374679             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-newest-cni-374679    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-79b8c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-374679             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 30s                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Warning  CgroupV1                 47s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node newest-cni-374679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node newest-cni-374679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node newest-cni-374679 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  38s                kubelet          Node newest-cni-374679 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 38s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    38s                kubelet          Node newest-cni-374679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     38s                kubelet          Node newest-cni-374679 status is now: NodeHasSufficientPID
	  Normal   Starting                 38s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           34s                node-controller  Node newest-cni-374679 event: Registered Node newest-cni-374679 in Controller
	  Normal   Starting                 20s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19s (x8 over 20s)  kubelet          Node newest-cni-374679 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 20s)  kubelet          Node newest-cni-374679 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x8 over 20s)  kubelet          Node newest-cni-374679 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6s                 node-controller  Node newest-cni-374679 event: Registered Node newest-cni-374679 in Controller
	
	
	==> dmesg <==
	[ +12.090113] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
	[Oct25 10:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:55] overlayfs: idmapped layers are currently not supported
	[Oct25 10:56] overlayfs: idmapped layers are currently not supported
	[ +41.501413] overlayfs: idmapped layers are currently not supported
	[Oct25 10:57] overlayfs: idmapped layers are currently not supported
	[Oct25 10:58] overlayfs: idmapped layers are currently not supported
	[Oct25 10:59] overlayfs: idmapped layers are currently not supported
	[  +1.429017] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6385714248ed5135738e4519a9a7ba1b7a7684bb2deddf78459d3ce4a2c36c29] <==
	{"level":"warn","ts":"2025-10-25T10:59:10.432105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.442036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.472677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.503959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.544198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.578942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.638476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.719812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.858451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:10.924181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.042353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.106424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.194792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.230857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.317729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.402617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.403130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.421395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.494458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.527666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.637403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.668384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.706478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.737268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:11.833820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38578","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:59:24 up  2:41,  0 user,  load average: 5.48, 3.89, 3.14
	Linux newest-cni-374679 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8be40753d70890608b2a489d9aa5f5bf7ab8c112142065d69f7262b75364d774] <==
	I1025 10:59:15.720432       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:59:15.720649       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:59:15.720747       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:59:15.720763       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:59:15.720773       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:59:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:59:15.947754       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:59:15.947853       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:59:15.947890       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:59:15.948352       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [4fff872c680ae750b7165d91452f79ef43d35a25038ab06b1ebec4e7bdd2f138] <==
	I1025 10:59:14.110798       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:59:14.110805       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:59:14.130955       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:59:14.147770       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:59:14.154968       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:59:14.155338       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:59:14.155351       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:59:14.155447       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:59:14.181356       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:59:14.181387       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:59:14.182359       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:59:14.218605       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:59:14.220025       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1025 10:59:14.345860       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:59:14.554514       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:59:16.315426       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:59:16.493783       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:59:16.570744       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:59:16.598315       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:59:16.829479       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.49.119"}
	I1025 10:59:16.881612       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.170.133"}
	I1025 10:59:19.191612       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:59:19.226327       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:59:19.264770       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:59:19.328737       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [ead41b389f560135dd1912a08ba529d0f7ff2d1d41c70eb5d5b61f81dd410d6d] <==
	I1025 10:59:18.892309       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:59:18.892321       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:59:18.898144       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:59:18.900273       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:59:18.900301       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:59:18.900377       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:59:18.903942       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 10:59:18.918831       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:59:18.953202       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:59:18.953220       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:59:18.953225       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:59:18.953230       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:59:18.918858       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:59:18.918873       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:59:18.918886       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:59:18.960323       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:59:18.962289       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:59:18.967234       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:59:18.987126       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:59:18.988285       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:59:18.989451       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:59:19.112377       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:59:19.112403       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:59:19.112410       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:59:19.156955       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8843eb30915a738f33777e68d8a34d76afae84adaf15a489d5e94aee3f640a5d] <==
	I1025 10:59:16.388133       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:59:16.759474       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:59:16.876484       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:59:16.878158       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:59:16.878286       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:59:16.916439       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:59:16.916553       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:59:16.931484       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:59:16.931770       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:59:16.931784       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:59:16.937737       1 config.go:200] "Starting service config controller"
	I1025 10:59:16.937752       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:59:16.937770       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:59:16.937773       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:59:16.937781       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:59:16.937785       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:59:16.941469       1 config.go:309] "Starting node config controller"
	I1025 10:59:16.941616       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:59:16.941652       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:59:17.041486       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:59:17.041531       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:59:17.041544       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8dd99f23a5130e7f746756316786e7365b2eac6f3b2500b3498d864236737f92] <==
	I1025 10:59:10.028836       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:59:13.628498       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:59:13.628616       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:59:13.628649       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:59:13.628695       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:59:13.959980       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:59:13.960011       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:59:13.968178       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:59:13.968275       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:59:13.968293       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:59:13.968313       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 10:59:14.141085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:59:14.141261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:59:14.141351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:59:14.144865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:59:14.145018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:59:14.190756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1025 10:59:14.270380       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.054439     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4aad81e0-ec4e-4952-812a-459e61c41122-lib-modules\") pod \"kindnet-qtb6l\" (UID: \"4aad81e0-ec4e-4952-812a-459e61c41122\") " pod="kube-system/kindnet-qtb6l"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.054496     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a627fd5d-c73d-44de-9703-44d8ec7f157c-xtables-lock\") pod \"kube-proxy-79b8c\" (UID: \"a627fd5d-c73d-44de-9703-44d8ec7f157c\") " pod="kube-system/kube-proxy-79b8c"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.054517     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a627fd5d-c73d-44de-9703-44d8ec7f157c-lib-modules\") pod \"kube-proxy-79b8c\" (UID: \"a627fd5d-c73d-44de-9703-44d8ec7f157c\") " pod="kube-system/kube-proxy-79b8c"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.054573     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4aad81e0-ec4e-4952-812a-459e61c41122-xtables-lock\") pod \"kindnet-qtb6l\" (UID: \"4aad81e0-ec4e-4952-812a-459e61c41122\") " pod="kube-system/kindnet-qtb6l"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.054608     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4aad81e0-ec4e-4952-812a-459e61c41122-cni-cfg\") pod \"kindnet-qtb6l\" (UID: \"4aad81e0-ec4e-4952-812a-459e61c41122\") " pod="kube-system/kindnet-qtb6l"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: E1025 10:59:14.130197     728 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-newest-cni-374679\" is forbidden: User \"system:node:newest-cni-374679\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-374679' and this object" podUID="fef477a738d804b4c1ff12466b8a71c9" pod="kube-system/kube-controller-manager-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.322489     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.322605     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.322634     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.325021     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: E1025 10:59:14.381532     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-374679\" already exists" pod="kube-system/kube-controller-manager-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.381566     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: E1025 10:59:14.466265     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-374679\" already exists" pod="kube-system/kube-scheduler-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.466301     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: E1025 10:59:14.572929     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-374679\" already exists" pod="kube-system/etcd-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: I1025 10:59:14.572967     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-374679"
	Oct 25 10:59:14 newest-cni-374679 kubelet[728]: E1025 10:59:14.734550     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-374679\" already exists" pod="kube-system/kube-apiserver-newest-cni-374679"
	Oct 25 10:59:15 newest-cni-374679 kubelet[728]: E1025 10:59:15.056418     728 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 10:59:15 newest-cni-374679 kubelet[728]: E1025 10:59:15.099993     728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a627fd5d-c73d-44de-9703-44d8ec7f157c-kube-proxy podName:a627fd5d-c73d-44de-9703-44d8ec7f157c nodeName:}" failed. No retries permitted until 2025-10-25 10:59:15.599943977 +0000 UTC m=+10.995152390 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/a627fd5d-c73d-44de-9703-44d8ec7f157c-kube-proxy") pod "kube-proxy-79b8c" (UID: "a627fd5d-c73d-44de-9703-44d8ec7f157c") : failed to sync configmap cache: timed out waiting for the condition
	Oct 25 10:59:15 newest-cni-374679 kubelet[728]: I1025 10:59:15.186257     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:59:15 newest-cni-374679 kubelet[728]: W1025 10:59:15.442565     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/crio-98ea141582ade312cd93de0b0c82496dfcee38be5ee122ad0fb32e95e03617b9 WatchSource:0}: Error finding container 98ea141582ade312cd93de0b0c82496dfcee38be5ee122ad0fb32e95e03617b9: Status 404 returned error can't find the container with id 98ea141582ade312cd93de0b0c82496dfcee38be5ee122ad0fb32e95e03617b9
	Oct 25 10:59:15 newest-cni-374679 kubelet[728]: W1025 10:59:15.752686     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/132f6b53f3213a69fe3b488d3450493a65028241a62ba6d3a53b4d721e1e148d/crio-cd8239a68ceff1a31a2ccf294a8ce8e48bd044d149cce0bd7ebb0b1edbcac3b8 WatchSource:0}: Error finding container cd8239a68ceff1a31a2ccf294a8ce8e48bd044d149cce0bd7ebb0b1edbcac3b8: Status 404 returned error can't find the container with id cd8239a68ceff1a31a2ccf294a8ce8e48bd044d149cce0bd7ebb0b1edbcac3b8
	Oct 25 10:59:18 newest-cni-374679 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:59:19 newest-cni-374679 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:59:19 newest-cni-374679 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
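The node conditions captured above show Ready=False with reason KubeletNotReady: the runtime reports NetworkReady=false because no CNI configuration file exists in /etc/cni/net.d yet. Below is a minimal Go sketch of that readiness check; the directory path comes from the condition message itself, while the .conf/.conflist/.json suffixes are assumptions based on common CNI conventions, not taken from minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// The kubelet condition above names /etc/cni/net.d as the directory
	// the network plugin watches for configuration.
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println("no CNI config dir:", err) // matches the NotReady reason
		return
	}
	for _, e := range entries {
		name := e.Name()
		if strings.HasSuffix(name, ".conf") || strings.HasSuffix(name, ".conflist") || strings.HasSuffix(name, ".json") {
			fmt.Println("found CNI config:", name)
			return
		}
	}
	fmt.Println("directory exists but holds no CNI config; the node would stay NotReady")
}

Once kindnet (whose startup log appears above) writes its configuration, this condition would normally flip and the node would report Ready.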
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-374679 -n newest-cni-374679
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-374679 -n newest-cni-374679: exit status 2 (647.496899ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
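The probe above asks only for the APIServer field via a Go template, and the harness treats the non-zero exit as informational ("may be ok"): the field prints Running while exit status 2 still flags a degraded profile. A short sketch of the same probe, reusing the binary path and profile name from the log; the exit-code interpretation here is the harness's convention, not documented CLI behavior.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.APIServer}}", "-p", "newest-cni-374679", "-n", "newest-cni-374679")
	out, err := cmd.Output() // stdout is returned even when the command exits non-zero
	fmt.Printf("APIServer: %s\n", out)
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", ee.ExitCode()) // 2 in the run above
	}
}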
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-374679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-4d24l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-8cf2z kubernetes-dashboard-855c9754f9-lxfqz
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-374679 describe pod coredns-66bc5c9577-4d24l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-8cf2z kubernetes-dashboard-855c9754f9-lxfqz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-374679 describe pod coredns-66bc5c9577-4d24l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-8cf2z kubernetes-dashboard-855c9754f9-lxfqz: exit status 1 (123.921079ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-4d24l" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-8cf2z" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-lxfqz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-374679 describe pod coredns-66bc5c9577-4d24l storage-provisioner dashboard-metrics-scraper-6ffb444bf9-8cf2z kubernetes-dashboard-855c9754f9-lxfqz: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.70s)
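Two things combine to produce the NotFound errors in this post-mortem: the pods returned by the field selector can be deleted between the list and the describe, and the describe call passes bare pod names without a namespace, so pods living in kube-system or kubernetes-dashboard are looked up in default and miss even if they still exist. A tolerant variant is sketched below in Go with the same context name; the jsonpath expression and flags are standard kubectl, but the overall flow is an illustration, not the harness's implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List namespace/name pairs for all non-running pods in one call.
	out, err := exec.Command("kubectl", "--context", "newest-cni-374679",
		"get", "po", "-A",
		`-o=jsonpath={range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}`,
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	for _, line := range strings.Fields(strings.TrimSpace(string(out))) {
		parts := strings.SplitN(line, "/", 2)
		if len(parts) != 2 {
			continue
		}
		// Describe with an explicit namespace so pods outside "default"
		// are found, and tolerate pods deleted since the list ran.
		desc, err := exec.Command("kubectl", "--context", "newest-cni-374679",
			"-n", parts[0], "describe", "pod", parts[1]).CombinedOutput()
		if err != nil {
			fmt.Printf("describe %s failed (possibly deleted): %v\n", line, err)
			continue
		}
		fmt.Println(string(desc))
	}
}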

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-093313 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-093313 --alsologtostderr -v=1: exit status 80 (1.904767285s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-093313 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 11:00:06.561121  474231 out.go:360] Setting OutFile to fd 1 ...
	I1025 11:00:06.561295  474231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 11:00:06.561307  474231 out.go:374] Setting ErrFile to fd 2...
	I1025 11:00:06.561313  474231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 11:00:06.561604  474231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 11:00:06.561876  474231 out.go:368] Setting JSON to false
	I1025 11:00:06.561903  474231 mustload.go:65] Loading cluster: no-preload-093313
	I1025 11:00:06.562441  474231 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 11:00:06.563082  474231 cli_runner.go:164] Run: docker container inspect no-preload-093313 --format={{.State.Status}}
	I1025 11:00:06.583498  474231 host.go:66] Checking if "no-preload-093313" exists ...
	I1025 11:00:06.583856  474231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 11:00:06.656258  474231 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 11:00:06.645050548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 11:00:06.656899  474231 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-093313 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 11:00:06.660489  474231 out.go:179] * Pausing node no-preload-093313 ... 
	I1025 11:00:06.663497  474231 host.go:66] Checking if "no-preload-093313" exists ...
	I1025 11:00:06.663867  474231 ssh_runner.go:195] Run: systemctl --version
	I1025 11:00:06.663924  474231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-093313
	I1025 11:00:06.681902  474231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/no-preload-093313/id_rsa Username:docker}
	I1025 11:00:06.793466  474231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 11:00:06.812976  474231 pause.go:52] kubelet running: true
	I1025 11:00:06.813059  474231 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 11:00:07.144122  474231 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 11:00:07.144257  474231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 11:00:07.218784  474231 cri.go:89] found id: "35125532fae1d125369f6e7bbbd7c735a67cc2fa39af4d0a1b7697175a3ea7bf"
	I1025 11:00:07.218810  474231 cri.go:89] found id: "01259e3c29c7a48a1bfeb65d5897aef0275e5a400f418985c64b8bf48d14a17b"
	I1025 11:00:07.218816  474231 cri.go:89] found id: "3fba2d7bed036af45d7420433508986b6404a9b71238141c3e885894291bec15"
	I1025 11:00:07.218820  474231 cri.go:89] found id: "16d8aea6ffa68886212ab4f1d30e95a58ec710ae4e0e9bad811855e16be7b0b8"
	I1025 11:00:07.218824  474231 cri.go:89] found id: "f3cd0358e70fd7964d67639fe6cd07db37eb01990ca5aa0384c07de252a3dd21"
	I1025 11:00:07.218828  474231 cri.go:89] found id: "555a214631009b8c9e0ad146cf6605f03eec6b67635b74eb9d3950940eecf3f5"
	I1025 11:00:07.218831  474231 cri.go:89] found id: "3dd46cc93a4d340b21d6515927392c6d678062f1fd4a8eb33513a013a750df3f"
	I1025 11:00:07.218833  474231 cri.go:89] found id: "1abb0086bfd53a0a24fd6a972d03dfa536774e2a3214e984b2913d5d42eb1584"
	I1025 11:00:07.218836  474231 cri.go:89] found id: "2c3118fc8aba39e254ed98a90027a52eb3bc4eb55ca37aed37f0638d414d5a7c"
	I1025 11:00:07.218844  474231 cri.go:89] found id: "f1c50364f80a6c172e9ab40c700aecaed5832969281e3fafcc082a07d4f2ff68"
	I1025 11:00:07.218848  474231 cri.go:89] found id: "9f492d4ffcca1be504956535aafe86832a25e88e3dd3bb6655bcd469185729d8"
	I1025 11:00:07.218851  474231 cri.go:89] found id: ""
	I1025 11:00:07.218907  474231 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 11:00:07.230435  474231 retry.go:31] will retry after 363.08831ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T11:00:07Z" level=error msg="open /run/runc: no such file or directory"
	I1025 11:00:07.593683  474231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 11:00:07.607271  474231 pause.go:52] kubelet running: false
	I1025 11:00:07.607344  474231 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 11:00:07.791391  474231 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 11:00:07.791473  474231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 11:00:07.868328  474231 cri.go:89] found id: "35125532fae1d125369f6e7bbbd7c735a67cc2fa39af4d0a1b7697175a3ea7bf"
	I1025 11:00:07.868355  474231 cri.go:89] found id: "01259e3c29c7a48a1bfeb65d5897aef0275e5a400f418985c64b8bf48d14a17b"
	I1025 11:00:07.868369  474231 cri.go:89] found id: "3fba2d7bed036af45d7420433508986b6404a9b71238141c3e885894291bec15"
	I1025 11:00:07.868373  474231 cri.go:89] found id: "16d8aea6ffa68886212ab4f1d30e95a58ec710ae4e0e9bad811855e16be7b0b8"
	I1025 11:00:07.868377  474231 cri.go:89] found id: "f3cd0358e70fd7964d67639fe6cd07db37eb01990ca5aa0384c07de252a3dd21"
	I1025 11:00:07.868381  474231 cri.go:89] found id: "555a214631009b8c9e0ad146cf6605f03eec6b67635b74eb9d3950940eecf3f5"
	I1025 11:00:07.868384  474231 cri.go:89] found id: "3dd46cc93a4d340b21d6515927392c6d678062f1fd4a8eb33513a013a750df3f"
	I1025 11:00:07.868394  474231 cri.go:89] found id: "1abb0086bfd53a0a24fd6a972d03dfa536774e2a3214e984b2913d5d42eb1584"
	I1025 11:00:07.868398  474231 cri.go:89] found id: "2c3118fc8aba39e254ed98a90027a52eb3bc4eb55ca37aed37f0638d414d5a7c"
	I1025 11:00:07.868406  474231 cri.go:89] found id: "f1c50364f80a6c172e9ab40c700aecaed5832969281e3fafcc082a07d4f2ff68"
	I1025 11:00:07.868410  474231 cri.go:89] found id: "9f492d4ffcca1be504956535aafe86832a25e88e3dd3bb6655bcd469185729d8"
	I1025 11:00:07.868413  474231 cri.go:89] found id: ""
	I1025 11:00:07.868469  474231 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 11:00:07.880606  474231 retry.go:31] will retry after 218.246049ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T11:00:07Z" level=error msg="open /run/runc: no such file or directory"
	I1025 11:00:08.099087  474231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 11:00:08.113174  474231 pause.go:52] kubelet running: false
	I1025 11:00:08.113278  474231 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 11:00:08.290051  474231 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 11:00:08.290161  474231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 11:00:08.370039  474231 cri.go:89] found id: "35125532fae1d125369f6e7bbbd7c735a67cc2fa39af4d0a1b7697175a3ea7bf"
	I1025 11:00:08.370060  474231 cri.go:89] found id: "01259e3c29c7a48a1bfeb65d5897aef0275e5a400f418985c64b8bf48d14a17b"
	I1025 11:00:08.370065  474231 cri.go:89] found id: "3fba2d7bed036af45d7420433508986b6404a9b71238141c3e885894291bec15"
	I1025 11:00:08.370069  474231 cri.go:89] found id: "16d8aea6ffa68886212ab4f1d30e95a58ec710ae4e0e9bad811855e16be7b0b8"
	I1025 11:00:08.370073  474231 cri.go:89] found id: "f3cd0358e70fd7964d67639fe6cd07db37eb01990ca5aa0384c07de252a3dd21"
	I1025 11:00:08.370076  474231 cri.go:89] found id: "555a214631009b8c9e0ad146cf6605f03eec6b67635b74eb9d3950940eecf3f5"
	I1025 11:00:08.370080  474231 cri.go:89] found id: "3dd46cc93a4d340b21d6515927392c6d678062f1fd4a8eb33513a013a750df3f"
	I1025 11:00:08.370113  474231 cri.go:89] found id: "1abb0086bfd53a0a24fd6a972d03dfa536774e2a3214e984b2913d5d42eb1584"
	I1025 11:00:08.370124  474231 cri.go:89] found id: "2c3118fc8aba39e254ed98a90027a52eb3bc4eb55ca37aed37f0638d414d5a7c"
	I1025 11:00:08.370131  474231 cri.go:89] found id: "f1c50364f80a6c172e9ab40c700aecaed5832969281e3fafcc082a07d4f2ff68"
	I1025 11:00:08.370134  474231 cri.go:89] found id: "9f492d4ffcca1be504956535aafe86832a25e88e3dd3bb6655bcd469185729d8"
	I1025 11:00:08.370137  474231 cri.go:89] found id: ""
	I1025 11:00:08.370207  474231 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 11:00:08.391329  474231 out.go:203] 
	W1025 11:00:08.393567  474231 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T11:00:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T11:00:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 11:00:08.393596  474231 out.go:285] * 
	* 
	W1025 11:00:08.399594  474231 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 11:00:08.402550  474231 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-093313 --alsologtostderr -v=1 failed: exit status 80
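The stderr above shows why the pause fails: after disabling the kubelet, minikube enumerates running CRI containers (the crictl calls succeed and find eleven IDs) and then shells out to sudo runc list -f json, retrying with sub-second backoff (retry.go) before exiting with GUEST_PAUSE. On this node /run/runc does not exist, consistent with the cri-o runtime keeping its OCI runtime state elsewhere (for example under /run/crun when crun is the default runtime; that mapping is an assumption, not something the log states). The retry-then-fail shape is easy to reproduce; a minimal Go sketch follows, with the delays taken from the log and the function names purely illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunning mirrors the exact command the log shows failing.
func listRunning() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	// Backoff intervals observed in the stderr above (363ms, 218ms).
	delays := []time.Duration{363 * time.Millisecond, 218 * time.Millisecond}
	out, err := listRunning()
	for _, d := range delays {
		if err == nil {
			break
		}
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		out, err = listRunning()
	}
	if err != nil {
		// Mirrors the GUEST_PAUSE exit: /run/runc is absent, so every
		// attempt fails the same way and retrying cannot help.
		fmt.Println("Pause: list running:", err)
		return
	}
	fmt.Println(string(out))
}

Note that crictl, which did find the container IDs above, talks to cri-o over the CRI socket rather than invoking runc directly, which is why the two enumeration paths disagree.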
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-093313
helpers_test.go:243: (dbg) docker inspect no-preload-093313:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b",
	        "Created": "2025-10-25T10:57:28.426935477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 467529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:58:58.41906842Z",
	            "FinishedAt": "2025-10-25T10:58:57.44640982Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/hostname",
	        "HostsPath": "/var/lib/docker/containers/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/hosts",
	        "LogPath": "/var/lib/docker/containers/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b-json.log",
	        "Name": "/no-preload-093313",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-093313:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-093313",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b",
	                "LowerDir": "/var/lib/docker/overlay2/1f5ea6e91c8f355c29623f9f36931296945f0bb9f9437babb5fc7356e43ab032-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f5ea6e91c8f355c29623f9f36931296945f0bb9f9437babb5fc7356e43ab032/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f5ea6e91c8f355c29623f9f36931296945f0bb9f9437babb5fc7356e43ab032/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f5ea6e91c8f355c29623f9f36931296945f0bb9f9437babb5fc7356e43ab032/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-093313",
	                "Source": "/var/lib/docker/volumes/no-preload-093313/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-093313",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-093313",
	                "name.minikube.sigs.k8s.io": "no-preload-093313",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ec00c19784fdb83cf50276edf46ccec2031ca5ed419f03dd7df8e4283c8edd5",
	            "SandboxKey": "/var/run/docker/netns/1ec00c19784f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-093313": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:c5:4d:3d:82:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2d822b8f1fe897a1280d2399b042700d5489e4df686ead1ec0a23045fa9c8398",
	                    "EndpointID": "0137824d5156f9a5b827e9eb194e074ccee7f693ce82464e0a929d483a3b17fa",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-093313",
	                        "6e8e2d881e7d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
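For anyone replaying this check by hand: the host ports recorded under "NetworkSettings.Ports" in the inspect dump above can be read directly with a Go template instead of scanning the full JSON. A minimal sketch, reusing the container name from this run and the same template shape that minikube's cli_runner invokes later in this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-093313

This prints only the mapped SSH port (33453 in the dump above).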
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-093313 -n no-preload-093313
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-093313 -n no-preload-093313: exit status 2 (423.237228ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
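Note: the "Running" host state combined with exit status 2 likely reflects the paused profile rather than a harness fault; the host container is up while other components are stopped, which is why the harness marks it "may be ok". A per-component breakdown is available from the same status command; a sketch, assuming this minikube build supports the --output flag:

	out/minikube-linux-arm64 status -p no-preload-093313 --output json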
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-093313 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-093313 logs -n 25: (2.007924039s)
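The same post-mortem bundle can be captured outside the harness and written to disk for attachment; a sketch, assuming the --file flag is present in this minikube build:

	out/minikube-linux-arm64 -p no-preload-093313 logs -n 25 --file=postmortem.txt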
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p default-k8s-diff-port-223394 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p disable-driver-mounts-487220                                                                                                                                                                                                               │ disable-driver-mounts-487220 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ start   │ -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:58 UTC │
	│ image   │ embed-certs-348342 image list --format=json                                                                                                                                                                                                   │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p embed-certs-348342 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p embed-certs-348342                                                                                                                                                                                                                         │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ delete  │ -p embed-certs-348342                                                                                                                                                                                                                         │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-093313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	│ stop    │ -p no-preload-093313 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-374679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	│ stop    │ -p newest-cni-374679 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ addons  │ enable dashboard -p newest-cni-374679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:59 UTC │
	│ addons  │ enable dashboard -p no-preload-093313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:59 UTC │
	│ image   │ newest-cni-374679 image list --format=json                                                                                                                                                                                                    │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │ 25 Oct 25 10:59 UTC │
	│ pause   │ -p newest-cni-374679 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │                     │
	│ delete  │ -p newest-cni-374679                                                                                                                                                                                                                          │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │ 25 Oct 25 10:59 UTC │
	│ delete  │ -p newest-cni-374679                                                                                                                                                                                                                          │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │ 25 Oct 25 10:59 UTC │
	│ start   │ -p auto-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-759329                  │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │                     │
	│ image   │ no-preload-093313 image list --format=json                                                                                                                                                                                                    │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 11:00 UTC │ 25 Oct 25 11:00 UTC │
	│ pause   │ -p no-preload-093313 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 11:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:59:28
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:59:28.676606  471804 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:59:28.676817  471804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:59:28.676844  471804 out.go:374] Setting ErrFile to fd 2...
	I1025 10:59:28.676863  471804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:59:28.677199  471804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:59:28.677795  471804 out.go:368] Setting JSON to false
	I1025 10:59:28.679052  471804 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9720,"bootTime":1761380249,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:59:28.679182  471804 start.go:141] virtualization:  
	I1025 10:59:28.686515  471804 out.go:179] * [auto-759329] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:59:28.690203  471804 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:59:28.690346  471804 notify.go:220] Checking for updates...
	I1025 10:59:28.696746  471804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:59:28.700021  471804 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:59:28.703464  471804 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:59:28.706574  471804 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:59:28.709934  471804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:59:28.713836  471804 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:59:28.713931  471804 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:59:28.761175  471804 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:59:28.761316  471804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:59:28.861339  471804 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:59:28.849915244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:59:28.861457  471804 docker.go:318] overlay module found
	I1025 10:59:28.865763  471804 out.go:179] * Using the docker driver based on user configuration
	I1025 10:59:28.868840  471804 start.go:305] selected driver: docker
	I1025 10:59:28.868864  471804 start.go:925] validating driver "docker" against <nil>
	I1025 10:59:28.868880  471804 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:59:28.869581  471804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:59:28.970167  471804 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:59:28.956406711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:59:28.970324  471804 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:59:28.970587  471804 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:59:28.974318  471804 out.go:179] * Using Docker driver with root privileges
	I1025 10:59:28.977632  471804 cni.go:84] Creating CNI manager for ""
	I1025 10:59:28.977710  471804 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:59:28.977724  471804 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:59:28.977959  471804 start.go:349] cluster config:
	{Name:auto-759329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-759329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:59:28.981743  471804 out.go:179] * Starting "auto-759329" primary control-plane node in "auto-759329" cluster
	I1025 10:59:28.985053  471804 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:59:28.988497  471804 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:59:28.991836  471804 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:59:28.991896  471804 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:59:28.991935  471804 cache.go:58] Caching tarball of preloaded images
	I1025 10:59:28.991997  471804 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:59:28.992257  471804 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:59:28.992274  471804 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:59:28.992391  471804 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/config.json ...
	I1025 10:59:28.992415  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/config.json: {Name:mk9a099f923bcbe085931afb7521cd2dae64de56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:29.017427  471804 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:59:29.017446  471804 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:59:29.017459  471804 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:59:29.017493  471804 start.go:360] acquireMachinesLock for auto-759329: {Name:mk57d9b9df7c393b7f55fadb9067894f3795e532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:59:29.017578  471804 start.go:364] duration metric: took 69.326µs to acquireMachinesLock for "auto-759329"
	I1025 10:59:29.017602  471804 start.go:93] Provisioning new machine with config: &{Name:auto-759329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-759329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:59:29.017679  471804 start.go:125] createHost starting for "" (driver="docker")
	W1025 10:59:28.583372  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	W1025 10:59:31.082608  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	I1025 10:59:29.021724  471804 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:59:29.022073  471804 start.go:159] libmachine.API.Create for "auto-759329" (driver="docker")
	I1025 10:59:29.022115  471804 client.go:168] LocalClient.Create starting
	I1025 10:59:29.022210  471804 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem
	I1025 10:59:29.022262  471804 main.go:141] libmachine: Decoding PEM data...
	I1025 10:59:29.022283  471804 main.go:141] libmachine: Parsing certificate...
	I1025 10:59:29.022362  471804 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem
	I1025 10:59:29.022386  471804 main.go:141] libmachine: Decoding PEM data...
	I1025 10:59:29.022397  471804 main.go:141] libmachine: Parsing certificate...
	I1025 10:59:29.022762  471804 cli_runner.go:164] Run: docker network inspect auto-759329 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:59:29.043711  471804 cli_runner.go:211] docker network inspect auto-759329 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:59:29.043809  471804 network_create.go:284] running [docker network inspect auto-759329] to gather additional debugging logs...
	I1025 10:59:29.043833  471804 cli_runner.go:164] Run: docker network inspect auto-759329
	W1025 10:59:29.062269  471804 cli_runner.go:211] docker network inspect auto-759329 returned with exit code 1
	I1025 10:59:29.062298  471804 network_create.go:287] error running [docker network inspect auto-759329]: docker network inspect auto-759329: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-759329 not found
	I1025 10:59:29.062312  471804 network_create.go:289] output of [docker network inspect auto-759329]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-759329 not found
	
	** /stderr **
	I1025 10:59:29.062427  471804 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:59:29.087921  471804 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2218a4d410c8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:a0:c3:54:c6:1f} reservation:<nil>}
	I1025 10:59:29.088290  471804 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-249eaf2d238d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:87:b9:4d:4c:0d} reservation:<nil>}
	I1025 10:59:29.088530  471804 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-210d4b236ff6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:d5:32:45:e6:85} reservation:<nil>}
	I1025 10:59:29.088943  471804 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a58720}
	I1025 10:59:29.088978  471804 network_create.go:124] attempt to create docker network auto-759329 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 10:59:29.089040  471804 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-759329 auto-759329
	I1025 10:59:29.171994  471804 network_create.go:108] docker network auto-759329 192.168.76.0/24 created
	I1025 10:59:29.172030  471804 kic.go:121] calculated static IP "192.168.76.2" for the "auto-759329" container
	I1025 10:59:29.172113  471804 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:59:29.207652  471804 cli_runner.go:164] Run: docker volume create auto-759329 --label name.minikube.sigs.k8s.io=auto-759329 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:59:29.235021  471804 oci.go:103] Successfully created a docker volume auto-759329
	I1025 10:59:29.235154  471804 cli_runner.go:164] Run: docker run --rm --name auto-759329-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-759329 --entrypoint /usr/bin/test -v auto-759329:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:59:30.292800  471804 cli_runner.go:217] Completed: docker run --rm --name auto-759329-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-759329 --entrypoint /usr/bin/test -v auto-759329:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.057595676s)
	I1025 10:59:30.292834  471804 oci.go:107] Successfully prepared a docker volume auto-759329
	I1025 10:59:30.292865  471804 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:59:30.292887  471804 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:59:30.292958  471804 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-759329:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 10:59:33.578403  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	W1025 10:59:35.578864  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	W1025 10:59:37.579496  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	I1025 10:59:36.181138  471804 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-759329:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.888138562s)
	I1025 10:59:36.181175  471804 kic.go:203] duration metric: took 5.88828418s to extract preloaded images to volume ...
	W1025 10:59:36.181304  471804 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:59:36.181421  471804 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:59:36.268629  471804 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-759329 --name auto-759329 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-759329 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-759329 --network auto-759329 --ip 192.168.76.2 --volume auto-759329:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:59:36.632424  471804 cli_runner.go:164] Run: docker container inspect auto-759329 --format={{.State.Running}}
	I1025 10:59:36.657883  471804 cli_runner.go:164] Run: docker container inspect auto-759329 --format={{.State.Status}}
	I1025 10:59:36.690373  471804 cli_runner.go:164] Run: docker exec auto-759329 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:59:36.755819  471804 oci.go:144] the created container "auto-759329" has a running status.
	I1025 10:59:36.755862  471804 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa...
	I1025 10:59:37.526650  471804 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:59:37.546280  471804 cli_runner.go:164] Run: docker container inspect auto-759329 --format={{.State.Status}}
	I1025 10:59:37.564894  471804 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:59:37.564919  471804 kic_runner.go:114] Args: [docker exec --privileged auto-759329 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:59:37.617272  471804 cli_runner.go:164] Run: docker container inspect auto-759329 --format={{.State.Status}}
	I1025 10:59:37.637628  471804 machine.go:93] provisionDockerMachine start ...
	I1025 10:59:37.637738  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:37.666759  471804 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:37.668151  471804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1025 10:59:37.668168  471804 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:59:37.668809  471804 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53490->127.0.0.1:33458: read: connection reset by peer
	W1025 10:59:40.078353  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	W1025 10:59:42.089316  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	I1025 10:59:40.821463  471804 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-759329
	
	I1025 10:59:40.821486  471804 ubuntu.go:182] provisioning hostname "auto-759329"
	I1025 10:59:40.821547  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:40.842092  471804 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:40.842446  471804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1025 10:59:40.842463  471804 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-759329 && echo "auto-759329" | sudo tee /etc/hostname
	I1025 10:59:41.010622  471804 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-759329
	
	I1025 10:59:41.010733  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:41.036275  471804 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:41.036689  471804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1025 10:59:41.036734  471804 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-759329' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-759329/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-759329' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:59:41.201962  471804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:59:41.202053  471804 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:59:41.202127  471804 ubuntu.go:190] setting up certificates
	I1025 10:59:41.202159  471804 provision.go:84] configureAuth start
	I1025 10:59:41.202263  471804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-759329
	I1025 10:59:41.220307  471804 provision.go:143] copyHostCerts
	I1025 10:59:41.220376  471804 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:59:41.220385  471804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:59:41.220466  471804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:59:41.220563  471804 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:59:41.220568  471804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:59:41.220599  471804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:59:41.220656  471804 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:59:41.220661  471804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:59:41.220685  471804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:59:41.220735  471804 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.auto-759329 san=[127.0.0.1 192.168.76.2 auto-759329 localhost minikube]
	I1025 10:59:41.640080  471804 provision.go:177] copyRemoteCerts
	I1025 10:59:41.640150  471804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:59:41.640194  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:41.658689  471804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa Username:docker}
	I1025 10:59:41.761886  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:59:41.782102  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1025 10:59:41.801568  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:59:41.819600  471804 provision.go:87] duration metric: took 617.401419ms to configureAuth
	I1025 10:59:41.819629  471804 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:59:41.819818  471804 config.go:182] Loaded profile config "auto-759329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:59:41.819935  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:41.837197  471804 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:41.837565  471804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1025 10:59:41.837596  471804 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:59:42.204209  471804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:59:42.204251  471804 machine.go:96] duration metric: took 4.566598868s to provisionDockerMachine
	I1025 10:59:42.204264  471804 client.go:171] duration metric: took 13.182136004s to LocalClient.Create
	I1025 10:59:42.204290  471804 start.go:167] duration metric: took 13.182219312s to libmachine.API.Create "auto-759329"
	I1025 10:59:42.204300  471804 start.go:293] postStartSetup for "auto-759329" (driver="docker")
	I1025 10:59:42.204311  471804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:59:42.204414  471804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:59:42.204466  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:42.232944  471804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa Username:docker}
	I1025 10:59:42.343216  471804 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:59:42.347649  471804 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:59:42.347676  471804 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:59:42.347688  471804 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:59:42.347743  471804 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:59:42.347844  471804 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:59:42.347949  471804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:59:42.358641  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:59:42.378520  471804 start.go:296] duration metric: took 174.202137ms for postStartSetup
	I1025 10:59:42.378968  471804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-759329
	I1025 10:59:42.396258  471804 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/config.json ...
	I1025 10:59:42.396546  471804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:59:42.396601  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:42.420359  471804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa Username:docker}
	I1025 10:59:42.528030  471804 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:59:42.532902  471804 start.go:128] duration metric: took 13.515206258s to createHost
	I1025 10:59:42.532927  471804 start.go:83] releasing machines lock for "auto-759329", held for 13.515339551s
	I1025 10:59:42.532998  471804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-759329
	I1025 10:59:42.549664  471804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:59:42.549742  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:42.549664  471804 ssh_runner.go:195] Run: cat /version.json
	I1025 10:59:42.550026  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:42.582902  471804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa Username:docker}
	I1025 10:59:42.586062  471804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa Username:docker}
	I1025 10:59:42.689949  471804 ssh_runner.go:195] Run: systemctl --version
	I1025 10:59:42.803836  471804 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:59:42.854306  471804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:59:42.859571  471804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:59:42.859707  471804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:59:42.890748  471804 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 10:59:42.890823  471804 start.go:495] detecting cgroup driver to use...
	I1025 10:59:42.890867  471804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:59:42.890927  471804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:59:42.908428  471804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:59:42.921439  471804 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:59:42.921504  471804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:59:42.939271  471804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:59:42.957630  471804 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:59:43.091505  471804 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:59:43.227750  471804 docker.go:234] disabling docker service ...
	I1025 10:59:43.227860  471804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:59:43.249322  471804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:59:43.262716  471804 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:59:43.387687  471804 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:59:43.516973  471804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
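	Note: the systemctl runs above stop, disable, and mask cri-docker and docker so CRI-O is the only CRI endpoint on the node. Condensed into one hedged sketch (unit names are the standard upstream ones):

	    # Stop, disable, and mask docker/cri-docker; '|| true' tolerates
	    # units that are absent on a given base image.
	    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	      sudo systemctl stop -f "$unit" 2>/dev/null || true
	    done
	    sudo systemctl disable cri-docker.socket docker.socket 2>/dev/null || true
	    sudo systemctl mask cri-docker.service docker.service
	    sudo systemctl is-active --quiet docker || echo "docker inactive"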
	I1025 10:59:43.530451  471804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:59:43.545890  471804 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:59:43.546039  471804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:43.555962  471804 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:59:43.556084  471804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:43.565343  471804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:43.577446  471804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:43.587144  471804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:59:43.595362  471804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:43.604657  471804 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:43.618609  471804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
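	Note: the sed runs above converge on a small CRI-O drop-in. A hedged reconstruction of what /etc/crio/crio.conf.d/02-crio.conf roughly contains afterwards (rebuilt from the sed commands; the TOML section placement follows upstream CRI-O conventions and is an assumption, since the log only shows the edits):

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]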
	I1025 10:59:43.628001  471804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:59:43.635716  471804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:59:43.643963  471804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:43.762457  471804 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:59:43.911735  471804 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:59:43.911869  471804 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:59:43.916710  471804 start.go:563] Will wait 60s for crictl version
	I1025 10:59:43.916823  471804 ssh_runner.go:195] Run: which crictl
	I1025 10:59:43.920421  471804 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:59:43.949360  471804 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:59:43.949450  471804 ssh_runner.go:195] Run: crio --version
	I1025 10:59:43.978054  471804 ssh_runner.go:195] Run: crio --version
	I1025 10:59:44.014942  471804 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:59:44.018086  471804 cli_runner.go:164] Run: docker network inspect auto-759329 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:59:44.038057  471804 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:59:44.042465  471804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
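	Note: the grep-then-rewrite pair above is an idempotent hosts-file update: drop any stale line for the name, append the current mapping, and copy the result into place. Generalized sketch (IP and name taken from the log line above):

	    # Idempotently pin a /etc/hosts entry (tab-separated, as minikube writes it).
	    name="host.minikube.internal"; ip="192.168.76.1"
	    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$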
	I1025 10:59:44.052906  471804 kubeadm.go:883] updating cluster {Name:auto-759329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-759329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:59:44.053018  471804 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:59:44.053088  471804 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:59:44.091703  471804 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:59:44.091728  471804 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:59:44.091789  471804 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:59:44.117273  471804 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:59:44.117297  471804 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:59:44.117305  471804 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:59:44.117399  471804 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-759329 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-759329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:59:44.117481  471804 ssh_runner.go:195] Run: crio config
	I1025 10:59:44.179445  471804 cni.go:84] Creating CNI manager for ""
	I1025 10:59:44.179471  471804 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:59:44.179496  471804 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:59:44.179519  471804 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-759329 NodeName:auto-759329 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:59:44.179645  471804 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-759329"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
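	Note: the kubeadm config above (InitConfiguration + ClusterConfiguration + KubeletConfiguration + KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml a few lines below. It can be sanity-checked before init; a hedged sketch using kubeadm's own validators:

	    # Validate the generated config, then rehearse init without mutating the node.
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run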
	I1025 10:59:44.179722  471804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:59:44.187770  471804 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:59:44.187850  471804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:59:44.198936  471804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1025 10:59:44.211753  471804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:59:44.224743  471804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1025 10:59:44.237332  471804 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:59:44.240908  471804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:59:44.250874  471804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:44.375900  471804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:59:44.394650  471804 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329 for IP: 192.168.76.2
	I1025 10:59:44.394673  471804 certs.go:195] generating shared ca certs ...
	I1025 10:59:44.394693  471804 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:44.394834  471804 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:59:44.394907  471804 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:59:44.394924  471804 certs.go:257] generating profile certs ...
	I1025 10:59:44.394981  471804 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.key
	I1025 10:59:44.395011  471804 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt with IP's: []
	I1025 10:59:44.930305  471804 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt ...
	I1025 10:59:44.930345  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: {Name:mk05b73ea04ab5ee129def3316d21b6b2b287e96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:44.930546  471804 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.key ...
	I1025 10:59:44.930560  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.key: {Name:mk877bd223d52053289e0941866975723f319f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:44.930655  471804 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.key.b622d078
	I1025 10:59:44.930672  471804 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.crt.b622d078 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 10:59:45.243268  471804 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.crt.b622d078 ...
	I1025 10:59:45.243309  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.crt.b622d078: {Name:mk1cf1e7061abee4b0dde352272cd5871264daee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:45.243516  471804 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.key.b622d078 ...
	I1025 10:59:45.243534  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.key.b622d078: {Name:mkb93fc7764be6596c71c130fc19807c2c97aeb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:45.243646  471804 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.crt.b622d078 -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.crt
	I1025 10:59:45.243743  471804 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.key.b622d078 -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.key
	I1025 10:59:45.243814  471804 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.key
	I1025 10:59:45.243838  471804 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.crt with IP's: []
	I1025 10:59:45.409870  471804 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.crt ...
	I1025 10:59:45.409911  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.crt: {Name:mk8a7c0eb7aaf9e771bc256829f0e53ad7cafb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:45.410144  471804 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.key ...
	I1025 10:59:45.410166  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.key: {Name:mk8cc6dbaf10265fb28100f3ba5e820344cb5629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:45.410426  471804 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:59:45.410479  471804 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:59:45.410493  471804 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:59:45.410576  471804 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:59:45.410614  471804 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:59:45.410645  471804 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:59:45.410727  471804 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:59:45.411379  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:59:45.439221  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:59:45.469081  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:59:45.494150  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:59:45.516419  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1025 10:59:45.536697  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:59:45.556632  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:59:45.578755  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:59:45.598649  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:59:45.617537  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:59:45.636243  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:59:45.655850  471804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:59:45.669094  471804 ssh_runner.go:195] Run: openssl version
	I1025 10:59:45.676921  471804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:59:45.686023  471804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:59:45.690127  471804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:59:45.690261  471804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:59:45.732415  471804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:59:45.741184  471804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:59:45.749837  471804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:59:45.753790  471804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:59:45.753905  471804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:59:45.797196  471804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:59:45.806171  471804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:59:45.815152  471804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:45.819393  471804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:45.819467  471804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:45.862149  471804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
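	Note: the three openssl/ln rounds above implement OpenSSL's hashed CA lookup: each trusted PEM is symlinked into /etc/ssl/certs under its subject hash, which is where names like b5213941.0 come from. The pattern for one cert:

	    # Link a CA cert under its OpenSSL subject hash so TLS clients find it.
	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    h=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
	    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"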
	I1025 10:59:45.871520  471804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:59:45.875419  471804 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:59:45.875518  471804 kubeadm.go:400] StartCluster: {Name:auto-759329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-759329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:59:45.875609  471804 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:59:45.875677  471804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:59:45.916979  471804 cri.go:89] found id: ""
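	Note: the empty ID list above is how minikube confirms a fresh node: it asks the CRI for any kube-system containers by pod-namespace label. The same query by hand:

	    # List kube-system container IDs straight from the CRI; empty output
	    # means no control-plane containers exist yet on this node.
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system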
	I1025 10:59:45.917068  471804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:59:45.930835  471804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:59:45.939638  471804 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:59:45.939705  471804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:59:45.950633  471804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:59:45.950656  471804 kubeadm.go:157] found existing configuration files:
	
	I1025 10:59:45.950719  471804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:59:45.962928  471804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:59:45.963024  471804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:59:45.971078  471804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:59:45.979315  471804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:59:45.979405  471804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:59:45.987366  471804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:59:45.995603  471804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:59:45.995671  471804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:59:46.007338  471804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:59:46.017036  471804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:59:46.017166  471804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:59:46.036609  471804 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:59:46.093914  471804 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:59:46.094055  471804 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:59:46.131648  471804 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:59:46.131808  471804 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:59:46.131881  471804 kubeadm.go:318] OS: Linux
	I1025 10:59:46.131955  471804 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:59:46.132036  471804 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:59:46.132121  471804 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:59:46.132199  471804 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:59:46.132282  471804 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:59:46.132358  471804 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:59:46.132437  471804 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:59:46.132516  471804 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:59:46.132598  471804 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:59:46.228002  471804 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:59:46.228151  471804 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:59:46.228261  471804 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:59:46.235949  471804 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1025 10:59:44.578546  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	W1025 10:59:46.580788  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	I1025 10:59:46.242699  471804 out.go:252]   - Generating certificates and keys ...
	I1025 10:59:46.242886  471804 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:59:46.242972  471804 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:59:46.719270  471804 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:59:47.317663  471804 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:59:48.018256  471804 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:59:48.090859  471804 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:59:48.662525  471804 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:59:48.662890  471804 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-759329 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1025 10:59:48.582733  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	W1025 10:59:51.079883  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	I1025 10:59:52.085562  467402 pod_ready.go:94] pod "coredns-66bc5c9577-c56mp" is "Ready"
	I1025 10:59:52.085594  467402 pod_ready.go:86] duration metric: took 32.013390888s for pod "coredns-66bc5c9577-c56mp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.088484  467402 pod_ready.go:83] waiting for pod "etcd-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.093125  467402 pod_ready.go:94] pod "etcd-no-preload-093313" is "Ready"
	I1025 10:59:52.093154  467402 pod_ready.go:86] duration metric: took 4.646663ms for pod "etcd-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.095578  467402 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.100114  467402 pod_ready.go:94] pod "kube-apiserver-no-preload-093313" is "Ready"
	I1025 10:59:52.100138  467402 pod_ready.go:86] duration metric: took 4.527294ms for pod "kube-apiserver-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.102614  467402 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.276623  467402 pod_ready.go:94] pod "kube-controller-manager-no-preload-093313" is "Ready"
	I1025 10:59:52.276657  467402 pod_ready.go:86] duration metric: took 174.006426ms for pod "kube-controller-manager-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.477129  467402 pod_ready.go:83] waiting for pod "kube-proxy-vlb79" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.877080  467402 pod_ready.go:94] pod "kube-proxy-vlb79" is "Ready"
	I1025 10:59:52.877113  467402 pod_ready.go:86] duration metric: took 399.952016ms for pod "kube-proxy-vlb79" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:53.076468  467402 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:53.476980  467402 pod_ready.go:94] pod "kube-scheduler-no-preload-093313" is "Ready"
	I1025 10:59:53.477019  467402 pod_ready.go:86] duration metric: took 400.521359ms for pod "kube-scheduler-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:53.477033  467402 pod_ready.go:40] duration metric: took 33.408544557s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:59:53.563381  467402 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:59:53.566681  467402 out.go:179] * Done! kubectl is now configured to use "no-preload-093313" cluster and "default" namespace by default
	I1025 10:59:49.253932  471804 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:59:49.254342  471804 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-759329 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:59:49.648303  471804 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:59:50.160750  471804 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:59:51.249665  471804 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:59:51.250151  471804 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:59:51.376997  471804 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:59:51.943528  471804 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:59:52.970078  471804 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:59:53.345480  471804 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:59:53.909105  471804 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:59:53.910618  471804 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:59:53.919580  471804 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:59:53.928504  471804 out.go:252]   - Booting up control plane ...
	I1025 10:59:53.928614  471804 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:59:53.928697  471804 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:59:53.929933  471804 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:59:53.953115  471804 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:59:53.953222  471804 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:59:53.962334  471804 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:59:53.962440  471804 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:59:53.962482  471804 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:59:54.148660  471804 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:59:54.148779  471804 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:59:55.649950  471804 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501381772s
	I1025 10:59:55.653526  471804 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:59:55.653639  471804 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 10:59:55.654252  471804 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:59:55.654356  471804 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:59:57.840417  471804 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.186426196s
	I1025 11:00:00.181827  471804 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.528207325s
	I1025 11:00:03.655448  471804 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.001791972s
	I1025 11:00:03.679707  471804 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 11:00:03.696439  471804 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 11:00:03.711918  471804 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 11:00:03.712147  471804 kubeadm.go:318] [mark-control-plane] Marking the node auto-759329 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 11:00:03.725160  471804 kubeadm.go:318] [bootstrap-token] Using token: 8gak69.s70srqn4njacqoj2
	I1025 11:00:03.728172  471804 out.go:252]   - Configuring RBAC rules ...
	I1025 11:00:03.728329  471804 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 11:00:03.733748  471804 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 11:00:03.747632  471804 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 11:00:03.754364  471804 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 11:00:03.758714  471804 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 11:00:03.763035  471804 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 11:00:04.063017  471804 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 11:00:04.540357  471804 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 11:00:05.063269  471804 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 11:00:05.064654  471804 kubeadm.go:318] 
	I1025 11:00:05.064737  471804 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 11:00:05.064750  471804 kubeadm.go:318] 
	I1025 11:00:05.064844  471804 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 11:00:05.064856  471804 kubeadm.go:318] 
	I1025 11:00:05.064883  471804 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 11:00:05.064951  471804 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 11:00:05.065010  471804 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 11:00:05.065019  471804 kubeadm.go:318] 
	I1025 11:00:05.065076  471804 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 11:00:05.065084  471804 kubeadm.go:318] 
	I1025 11:00:05.065134  471804 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 11:00:05.065142  471804 kubeadm.go:318] 
	I1025 11:00:05.065197  471804 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 11:00:05.065285  471804 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 11:00:05.065362  471804 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 11:00:05.065372  471804 kubeadm.go:318] 
	I1025 11:00:05.065460  471804 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 11:00:05.065554  471804 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 11:00:05.065564  471804 kubeadm.go:318] 
	I1025 11:00:05.065660  471804 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 8gak69.s70srqn4njacqoj2 \
	I1025 11:00:05.065773  471804 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 \
	I1025 11:00:05.065799  471804 kubeadm.go:318] 	--control-plane 
	I1025 11:00:05.065808  471804 kubeadm.go:318] 
	I1025 11:00:05.065897  471804 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 11:00:05.065906  471804 kubeadm.go:318] 
	I1025 11:00:05.066017  471804 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 8gak69.s70srqn4njacqoj2 \
	I1025 11:00:05.066129  471804 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 
	I1025 11:00:05.069736  471804 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 11:00:05.070011  471804 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 11:00:05.070126  471804 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
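	Note: the init output above ends with ready-made join commands. A worker would run the printed command as root; the token and CA hash below are the ephemeral values from this run (the token's ttl is 24h, per the bootstrapTokens section above):

	    sudo kubeadm join control-plane.minikube.internal:8443 \
	      --token 8gak69.s70srqn4njacqoj2 \
	      --discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5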
	I1025 11:00:05.070151  471804 cni.go:84] Creating CNI manager for ""
	I1025 11:00:05.070163  471804 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 11:00:05.073585  471804 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 11:00:05.076615  471804 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 11:00:05.081421  471804 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 11:00:05.081448  471804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 11:00:05.098550  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
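	Note: the apply above installs the kindnet manifest using the pinned kubectl against the node-local kubeconfig. A hedged follow-up to wait for the CNI pods (the daemonset name "kindnet" is inferred from the kindnet-* pod names later in this log, not stated here):

	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system rollout status daemonset kindnet --timeout=120s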
	I1025 11:00:05.631060  471804 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 11:00:05.631201  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:05.631270  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-759329 minikube.k8s.io/updated_at=2025_10_25T11_00_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=auto-759329 minikube.k8s.io/primary=true
	I1025 11:00:05.663594  471804 ops.go:34] apiserver oom_adj: -16
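	Note: the oom_adj probe above verifies the API server is shielded from the kernel OOM killer (-16 here; more negative means less likely to be killed). Reproduced by hand:

	    # Read the kube-apiserver's OOM adjustment via /proc.
	    cat "/proc/$(pgrep kube-apiserver | head -n1)/oom_adj"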
	I1025 11:00:05.874182  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:06.374412  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:06.875201  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:07.375115  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:07.874611  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:08.375187  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.371950358Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a9a693d6-3013-478e-b7b6-417bbef272ad name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.380421999Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e37cff44-5f8d-4e4b-8928-e13de79b3365 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.385807926Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq/dashboard-metrics-scraper" id=6c05ae64-8223-4f64-9f03-34f7210f9b02 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.387130572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.400445505Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.401254563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.421615106Z" level=info msg="Created container f1c50364f80a6c172e9ab40c700aecaed5832969281e3fafcc082a07d4f2ff68: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq/dashboard-metrics-scraper" id=6c05ae64-8223-4f64-9f03-34f7210f9b02 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.449171087Z" level=info msg="Starting container: f1c50364f80a6c172e9ab40c700aecaed5832969281e3fafcc082a07d4f2ff68" id=5218cfe0-18aa-4c28-bb64-a44c0d472706 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.466140814Z" level=info msg="Started container" PID=1661 containerID=f1c50364f80a6c172e9ab40c700aecaed5832969281e3fafcc082a07d4f2ff68 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq/dashboard-metrics-scraper id=5218cfe0-18aa-4c28-bb64-a44c0d472706 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b781afab0162e41cce030bf1abee96ba76c927cb7901aaf10aa1e9874a84755
	Oct 25 10:59:54 no-preload-093313 conmon[1659]: conmon f1c50364f80a6c172e9a <ninfo>: container 1661 exited with status 1
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.716305141Z" level=info msg="Removing container: 640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e" id=983f9d2a-e492-4168-b0b8-812b83d168c1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.729875437Z" level=info msg="Error loading conmon cgroup of container 640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e: cgroup deleted" id=983f9d2a-e492-4168-b0b8-812b83d168c1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.738847522Z" level=info msg="Removed container 640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq/dashboard-metrics-scraper" id=983f9d2a-e492-4168-b0b8-812b83d168c1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.224553088Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.232422893Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.232625948Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.232704816Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.238528967Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.238723315Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.238819644Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.2467834Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.247003258Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.247088461Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.25462027Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.254771607Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f1c50364f80a6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   2                   5b781afab0162       dashboard-metrics-scraper-6ffb444bf9-k2vjq   kubernetes-dashboard
	35125532fae1d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           19 seconds ago       Running             storage-provisioner         2                   7bc22231e165c       storage-provisioner                          kube-system
	9f492d4ffcca1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   accfb2225db41       kubernetes-dashboard-855c9754f9-xrszz        kubernetes-dashboard
	01259e3c29c7a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago       Running             coredns                     1                   46c8f3de3f7c3       coredns-66bc5c9577-c56mp                     kube-system
	58f9df9753fe7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago       Running             busybox                     1                   e239afa568164       busybox                                      default
	3fba2d7bed036       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago       Running             kindnet-cni                 1                   7fb1b394f9e78       kindnet-6tbtt                                kube-system
	16d8aea6ffa68       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago       Running             kube-proxy                  1                   a756e7bf8e7fc       kube-proxy-vlb79                             kube-system
	f3cd0358e70fd       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           50 seconds ago       Exited              storage-provisioner         1                   7bc22231e165c       storage-provisioner                          kube-system
	555a214631009       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   e4dbc9e42e264       etcd-no-preload-093313                       kube-system
	3dd46cc93a4d3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6db5691194bcf       kube-controller-manager-no-preload-093313    kube-system
	1abb0086bfd53       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   55bece34c1abb       kube-scheduler-no-preload-093313             kube-system
	2c3118fc8aba3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c9783f23fe6e4       kube-apiserver-no-preload-093313             kube-system
	
	
	==> coredns [01259e3c29c7a48a1bfeb65d5897aef0275e5a400f418985c64b8bf48d14a17b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53649 - 43618 "HINFO IN 4211319309414066805.1736044630763128359. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016766444s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-093313
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-093313
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=no-preload-093313
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_58_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:58:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-093313
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:59:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:59:38 +0000   Sat, 25 Oct 2025 10:58:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:59:38 +0000   Sat, 25 Oct 2025 10:58:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:59:38 +0000   Sat, 25 Oct 2025 10:58:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:59:38 +0000   Sat, 25 Oct 2025 10:58:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-093313
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                03f9066b-feaa-4e69-be40-1b2314524518
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-c56mp                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     116s
	  kube-system                 etcd-no-preload-093313                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-6tbtt                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-no-preload-093313              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-no-preload-093313     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-vlb79                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-no-preload-093313              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-k2vjq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xrszz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 114s                   kube-proxy       
	  Normal   Starting                 49s                    kube-proxy       
	  Warning  CgroupV1                 2m11s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node no-preload-093313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node no-preload-093313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node no-preload-093313 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m1s                   kubelet          Node no-preload-093313 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m1s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m1s                   kubelet          Node no-preload-093313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m1s                   kubelet          Node no-preload-093313 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m1s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           117s                   node-controller  Node no-preload-093313 event: Registered Node no-preload-093313 in Controller
	  Normal   NodeReady                100s                   kubelet          Node no-preload-093313 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node no-preload-093313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node no-preload-093313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node no-preload-093313 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node no-preload-093313 event: Registered Node no-preload-093313 in Controller
	
	
	==> dmesg <==
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
	[Oct25 10:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:55] overlayfs: idmapped layers are currently not supported
	[Oct25 10:56] overlayfs: idmapped layers are currently not supported
	[ +41.501413] overlayfs: idmapped layers are currently not supported
	[Oct25 10:57] overlayfs: idmapped layers are currently not supported
	[Oct25 10:58] overlayfs: idmapped layers are currently not supported
	[Oct25 10:59] overlayfs: idmapped layers are currently not supported
	[  +1.429017] overlayfs: idmapped layers are currently not supported
	[ +48.923730] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [555a214631009b8c9e0ad146cf6605f03eec6b67635b74eb9d3950940eecf3f5] <==
	{"level":"warn","ts":"2025-10-25T10:59:14.016172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.066280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.116189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.156027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.194577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.233842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.286705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.315858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.337876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.364244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.393182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.431010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.464404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.498637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.551706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.561689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.593090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.677820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.720051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.762826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.786806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.832038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.838030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.871874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:15.060560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39764","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:00:10 up  2:42,  0 user,  load average: 4.58, 3.89, 3.17
	Linux no-preload-093313 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3fba2d7bed036af45d7420433508986b6404a9b71238141c3e885894291bec15] <==
	I1025 10:59:20.015030       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:59:20.015275       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:59:20.015433       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:59:20.015446       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:59:20.015458       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:59:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:59:20.224280       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:59:20.238155       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:59:20.238188       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:59:20.238292       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:59:50.225218       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:59:50.230618       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:59:50.230740       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:59:50.230838       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1025 10:59:51.138454       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:59:51.138514       1 metrics.go:72] Registering metrics
	I1025 10:59:51.138594       1 controller.go:711] "Syncing nftables rules"
	I1025 11:00:00.224113       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 11:00:00.224219       1 main.go:301] handling current node
	I1025 11:00:10.227239       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 11:00:10.227279       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2c3118fc8aba39e254ed98a90027a52eb3bc4eb55ca37aed37f0638d414d5a7c] <==
	I1025 10:59:17.337890       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:59:17.466793       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:59:17.378532       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:59:17.466937       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:59:17.493704       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:59:17.494170       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:59:17.503068       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:59:17.503148       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:59:17.503328       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:59:17.378169       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:59:17.504911       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:59:17.505161       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:59:17.547388       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1025 10:59:17.590904       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:59:17.832780       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:59:19.065800       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:59:19.262000       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:59:19.408503       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:59:19.489876       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:59:19.513074       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:59:19.660092       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.145.4"}
	I1025 10:59:19.696092       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.187.201"}
	I1025 10:59:21.547934       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:59:21.956068       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:59:22.009468       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3dd46cc93a4d340b21d6515927392c6d678062f1fd4a8eb33513a013a750df3f] <==
	I1025 10:59:21.549479       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:59:21.561255       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:59:21.562517       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:59:21.578217       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:59:21.578298       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:59:21.578329       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:59:21.579034       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:59:21.579163       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:59:21.579314       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:59:21.582506       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:59:21.582619       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:59:21.589477       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:59:21.595265       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:59:21.598631       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:59:21.598753       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:59:21.598917       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:59:21.599142       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:59:21.600517       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:59:21.601107       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:59:21.602501       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:59:21.602644       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:59:21.603854       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:59:21.606299       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:59:21.609614       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:59:21.614315       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	
	
	==> kube-proxy [16d8aea6ffa68886212ab4f1d30e95a58ec710ae4e0e9bad811855e16be7b0b8] <==
	I1025 10:59:20.035063       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:59:20.120442       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:59:20.222606       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:59:20.222647       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:59:20.222731       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:59:20.309706       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:59:20.309816       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:59:20.328899       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:59:20.330423       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:59:20.330953       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:59:20.332263       1 config.go:200] "Starting service config controller"
	I1025 10:59:20.332321       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:59:20.332361       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:59:20.332389       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:59:20.332425       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:59:20.332451       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:59:20.335476       1 config.go:309] "Starting node config controller"
	I1025 10:59:20.336627       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:59:20.336708       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:59:20.432998       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:59:20.433100       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:59:20.433126       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1abb0086bfd53a0a24fd6a972d03dfa536774e2a3214e984b2913d5d42eb1584] <==
	I1025 10:59:12.212899       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:59:17.402411       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:59:17.402470       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:59:17.402489       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:59:17.402498       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:59:17.640279       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:59:17.684227       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:59:17.692796       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:59:17.693020       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:59:17.731125       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:59:17.693038       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:59:18.034243       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:59:22 no-preload-093313 kubelet[772]: I1025 10:59:22.295292     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtsww\" (UniqueName: \"kubernetes.io/projected/609a0c23-fcd6-4966-b4dd-6411fdf189f7-kube-api-access-qtsww\") pod \"kubernetes-dashboard-855c9754f9-xrszz\" (UID: \"609a0c23-fcd6-4966-b4dd-6411fdf189f7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xrszz"
	Oct 25 10:59:22 no-preload-093313 kubelet[772]: I1025 10:59:22.295380     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/386db8b1-28c7-49b0-b999-71145f94a1f7-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-k2vjq\" (UID: \"386db8b1-28c7-49b0-b999-71145f94a1f7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq"
	Oct 25 10:59:22 no-preload-093313 kubelet[772]: I1025 10:59:22.295406     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/609a0c23-fcd6-4966-b4dd-6411fdf189f7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-xrszz\" (UID: \"609a0c23-fcd6-4966-b4dd-6411fdf189f7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xrszz"
	Oct 25 10:59:22 no-preload-093313 kubelet[772]: I1025 10:59:22.295535     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zf57\" (UniqueName: \"kubernetes.io/projected/386db8b1-28c7-49b0-b999-71145f94a1f7-kube-api-access-5zf57\") pod \"dashboard-metrics-scraper-6ffb444bf9-k2vjq\" (UID: \"386db8b1-28c7-49b0-b999-71145f94a1f7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq"
	Oct 25 10:59:23 no-preload-093313 kubelet[772]: W1025 10:59:23.189490     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/crio-accfb2225db4177b91eb1b3662d654d516331950a39442dbb0a1f710b17e7d4b WatchSource:0}: Error finding container accfb2225db4177b91eb1b3662d654d516331950a39442dbb0a1f710b17e7d4b: Status 404 returned error can't find the container with id accfb2225db4177b91eb1b3662d654d516331950a39442dbb0a1f710b17e7d4b
	Oct 25 10:59:23 no-preload-093313 kubelet[772]: W1025 10:59:23.211855     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/crio-5b781afab0162e41cce030bf1abee96ba76c927cb7901aaf10aa1e9874a84755 WatchSource:0}: Error finding container 5b781afab0162e41cce030bf1abee96ba76c927cb7901aaf10aa1e9874a84755: Status 404 returned error can't find the container with id 5b781afab0162e41cce030bf1abee96ba76c927cb7901aaf10aa1e9874a84755
	Oct 25 10:59:31 no-preload-093313 kubelet[772]: I1025 10:59:31.665969     772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xrszz" podStartSLOduration=2.163621599 podStartE2EDuration="9.665949923s" podCreationTimestamp="2025-10-25 10:59:22 +0000 UTC" firstStartedPulling="2025-10-25 10:59:23.19504164 +0000 UTC m=+17.448288298" lastFinishedPulling="2025-10-25 10:59:30.697369956 +0000 UTC m=+24.950616622" observedRunningTime="2025-10-25 10:59:31.665868372 +0000 UTC m=+25.919115079" watchObservedRunningTime="2025-10-25 10:59:31.665949923 +0000 UTC m=+25.919196581"
	Oct 25 10:59:36 no-preload-093313 kubelet[772]: I1025 10:59:36.648991     772 scope.go:117] "RemoveContainer" containerID="562baf6bb10a6069b8d5062293b9b4d7b207ce3e429c33e9aaeb2a3773e6c336"
	Oct 25 10:59:37 no-preload-093313 kubelet[772]: I1025 10:59:37.654528     772 scope.go:117] "RemoveContainer" containerID="562baf6bb10a6069b8d5062293b9b4d7b207ce3e429c33e9aaeb2a3773e6c336"
	Oct 25 10:59:37 no-preload-093313 kubelet[772]: I1025 10:59:37.654824     772 scope.go:117] "RemoveContainer" containerID="640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e"
	Oct 25 10:59:37 no-preload-093313 kubelet[772]: E1025 10:59:37.654964     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2vjq_kubernetes-dashboard(386db8b1-28c7-49b0-b999-71145f94a1f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq" podUID="386db8b1-28c7-49b0-b999-71145f94a1f7"
	Oct 25 10:59:38 no-preload-093313 kubelet[772]: I1025 10:59:38.658722     772 scope.go:117] "RemoveContainer" containerID="640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e"
	Oct 25 10:59:38 no-preload-093313 kubelet[772]: E1025 10:59:38.658901     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2vjq_kubernetes-dashboard(386db8b1-28c7-49b0-b999-71145f94a1f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq" podUID="386db8b1-28c7-49b0-b999-71145f94a1f7"
	Oct 25 10:59:43 no-preload-093313 kubelet[772]: I1025 10:59:43.140038     772 scope.go:117] "RemoveContainer" containerID="640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e"
	Oct 25 10:59:43 no-preload-093313 kubelet[772]: E1025 10:59:43.140613     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2vjq_kubernetes-dashboard(386db8b1-28c7-49b0-b999-71145f94a1f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq" podUID="386db8b1-28c7-49b0-b999-71145f94a1f7"
	Oct 25 10:59:50 no-preload-093313 kubelet[772]: I1025 10:59:50.690701     772 scope.go:117] "RemoveContainer" containerID="f3cd0358e70fd7964d67639fe6cd07db37eb01990ca5aa0384c07de252a3dd21"
	Oct 25 10:59:54 no-preload-093313 kubelet[772]: I1025 10:59:54.370206     772 scope.go:117] "RemoveContainer" containerID="640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e"
	Oct 25 10:59:54 no-preload-093313 kubelet[772]: I1025 10:59:54.705286     772 scope.go:117] "RemoveContainer" containerID="640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e"
	Oct 25 10:59:54 no-preload-093313 kubelet[772]: I1025 10:59:54.705759     772 scope.go:117] "RemoveContainer" containerID="f1c50364f80a6c172e9ab40c700aecaed5832969281e3fafcc082a07d4f2ff68"
	Oct 25 10:59:54 no-preload-093313 kubelet[772]: E1025 10:59:54.706060     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2vjq_kubernetes-dashboard(386db8b1-28c7-49b0-b999-71145f94a1f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq" podUID="386db8b1-28c7-49b0-b999-71145f94a1f7"
	Oct 25 11:00:03 no-preload-093313 kubelet[772]: I1025 11:00:03.139545     772 scope.go:117] "RemoveContainer" containerID="f1c50364f80a6c172e9ab40c700aecaed5832969281e3fafcc082a07d4f2ff68"
	Oct 25 11:00:03 no-preload-093313 kubelet[772]: E1025 11:00:03.140510     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2vjq_kubernetes-dashboard(386db8b1-28c7-49b0-b999-71145f94a1f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq" podUID="386db8b1-28c7-49b0-b999-71145f94a1f7"
	Oct 25 11:00:07 no-preload-093313 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 11:00:07 no-preload-093313 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 11:00:07 no-preload-093313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [9f492d4ffcca1be504956535aafe86832a25e88e3dd3bb6655bcd469185729d8] <==
	2025/10/25 10:59:30 Using namespace: kubernetes-dashboard
	2025/10/25 10:59:30 Using in-cluster config to connect to apiserver
	2025/10/25 10:59:30 Using secret token for csrf signing
	2025/10/25 10:59:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:59:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:59:30 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:59:30 Generating JWE encryption key
	2025/10/25 10:59:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:59:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:59:31 Initializing JWE encryption key from synchronized object
	2025/10/25 10:59:31 Creating in-cluster Sidecar client
	2025/10/25 10:59:31 Serving insecurely on HTTP port: 9090
	2025/10/25 10:59:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 11:00:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:59:30 Starting overwatch
	
	
	==> storage-provisioner [35125532fae1d125369f6e7bbbd7c735a67cc2fa39af4d0a1b7697175a3ea7bf] <==
	I1025 10:59:50.787885       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:59:50.803904       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:59:50.804033       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:59:50.808403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:59:54.264607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:59:58.525355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:02.137484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:05.197255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:08.220465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:08.232821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 11:00:08.233953       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 11:00:08.245520       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-093313_cbf79b3a-9c46-4871-bea1-08477c52ed67!
	I1025 11:00:08.245467       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77448026-dc1c-4f90-a3be-98f6a3fbe47d", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-093313_cbf79b3a-9c46-4871-bea1-08477c52ed67 became leader
	W1025 11:00:08.246284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:08.253924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 11:00:08.354367       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-093313_cbf79b3a-9c46-4871-bea1-08477c52ed67!
	W1025 11:00:10.262601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:10.278625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f3cd0358e70fd7964d67639fe6cd07db37eb01990ca5aa0384c07de252a3dd21] <==
	I1025 10:59:19.784973       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:59:49.786719       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-093313 -n no-preload-093313
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-093313 -n no-preload-093313: exit status 2 (627.247638ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-093313 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-093313
helpers_test.go:243: (dbg) docker inspect no-preload-093313:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b",
	        "Created": "2025-10-25T10:57:28.426935477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 467529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:58:58.41906842Z",
	            "FinishedAt": "2025-10-25T10:58:57.44640982Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/hostname",
	        "HostsPath": "/var/lib/docker/containers/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/hosts",
	        "LogPath": "/var/lib/docker/containers/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b-json.log",
	        "Name": "/no-preload-093313",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-093313:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-093313",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b",
	                "LowerDir": "/var/lib/docker/overlay2/1f5ea6e91c8f355c29623f9f36931296945f0bb9f9437babb5fc7356e43ab032-init/diff:/var/lib/docker/overlay2/a746402fafa86965bd9394d930bb96ec16982037f9f5e7f4f1de6d09576ff851/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f5ea6e91c8f355c29623f9f36931296945f0bb9f9437babb5fc7356e43ab032/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f5ea6e91c8f355c29623f9f36931296945f0bb9f9437babb5fc7356e43ab032/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f5ea6e91c8f355c29623f9f36931296945f0bb9f9437babb5fc7356e43ab032/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-093313",
	                "Source": "/var/lib/docker/volumes/no-preload-093313/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-093313",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-093313",
	                "name.minikube.sigs.k8s.io": "no-preload-093313",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ec00c19784fdb83cf50276edf46ccec2031ca5ed419f03dd7df8e4283c8edd5",
	            "SandboxKey": "/var/run/docker/netns/1ec00c19784f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-093313": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:c5:4d:3d:82:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2d822b8f1fe897a1280d2399b042700d5489e4df686ead1ec0a23045fa9c8398",
	                    "EndpointID": "0137824d5156f9a5b827e9eb194e074ccee7f693ce82464e0a929d483a3b17fa",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-093313",
	                        "6e8e2d881e7d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
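
The "Ports" block in the inspect output above is the same structure minikube queries later in this log to discover which 127.0.0.1 port Docker published for the container's SSH endpoint (the docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" runs below). A minimal Go sketch of that lookup, assuming only a docker CLI on PATH; the helper name hostSSHPort is ours, not minikube's:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort asks Docker for the host port published for the container's
// 22/tcp endpoint, using the same Go template minikube's cli_runner passes
// to `docker container inspect -f` later in this log.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Against the inspect output above this prints 33453, the HostPort
	// listed under "22/tcp" in NetworkSettings.Ports.
	port, err := hostSSHPort("no-preload-093313")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ssh host port:", port)
}
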
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-093313 -n no-preload-093313
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-093313 -n no-preload-093313: exit status 2 (430.053895ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
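
The harness's "(may be ok)" note reflects that minikube status reports cluster state partly through its exit code, so a non-zero exit can still come with usable output; here the host prints Running even though the pause left other components non-running. A hedged Go sketch of the tolerant check the harness performs above; checkHostStatus is a hypothetical name, and the binary path and profile are the ones used in this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// checkHostStatus runs `minikube status --format={{.Host}}` for a profile
// and, like the helpers_test.go check above, treats exit status 2 as
// tolerable: stdout may still carry a readable state such as "Running".
func checkHostStatus(profile string) (state string, healthy bool, err error) {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout is captured even on a non-zero exit
	state = strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 2 {
		return state, false, nil // "exit status 2 (may be ok)"
	}
	return state, err == nil, err
}

func main() {
	state, healthy, err := checkHostStatus("no-preload-093313")
	fmt.Printf("host=%s healthy=%v err=%v\n", state, healthy, err)
}
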
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-093313 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-093313 logs -n 25: (1.397341657s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p default-k8s-diff-port-223394 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p default-k8s-diff-port-223394                                                                                                                                                                                                               │ default-k8s-diff-port-223394 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ delete  │ -p disable-driver-mounts-487220                                                                                                                                                                                                               │ disable-driver-mounts-487220 │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ start   │ -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:58 UTC │
	│ image   │ embed-certs-348342 image list --format=json                                                                                                                                                                                                   │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │ 25 Oct 25 10:57 UTC │
	│ pause   │ -p embed-certs-348342 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:57 UTC │                     │
	│ delete  │ -p embed-certs-348342                                                                                                                                                                                                                         │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ delete  │ -p embed-certs-348342                                                                                                                                                                                                                         │ embed-certs-348342           │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-093313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	│ stop    │ -p no-preload-093313 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ addons  │ enable metrics-server -p newest-cni-374679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │                     │
	│ stop    │ -p newest-cni-374679 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ addons  │ enable dashboard -p newest-cni-374679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:59 UTC │
	│ addons  │ enable dashboard -p no-preload-093313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:58 UTC │
	│ start   │ -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 10:58 UTC │ 25 Oct 25 10:59 UTC │
	│ image   │ newest-cni-374679 image list --format=json                                                                                                                                                                                                    │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │ 25 Oct 25 10:59 UTC │
	│ pause   │ -p newest-cni-374679 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │                     │
	│ delete  │ -p newest-cni-374679                                                                                                                                                                                                                          │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │ 25 Oct 25 10:59 UTC │
	│ delete  │ -p newest-cni-374679                                                                                                                                                                                                                          │ newest-cni-374679            │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │ 25 Oct 25 10:59 UTC │
	│ start   │ -p auto-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-759329                  │ jenkins │ v1.37.0 │ 25 Oct 25 10:59 UTC │                     │
	│ image   │ no-preload-093313 image list --format=json                                                                                                                                                                                                    │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 11:00 UTC │ 25 Oct 25 11:00 UTC │
	│ pause   │ -p no-preload-093313 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-093313            │ jenkins │ v1.37.0 │ 25 Oct 25 11:00 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:59:28
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:59:28.676606  471804 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:59:28.676817  471804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:59:28.676844  471804 out.go:374] Setting ErrFile to fd 2...
	I1025 10:59:28.676863  471804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:59:28.677199  471804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:59:28.677795  471804 out.go:368] Setting JSON to false
	I1025 10:59:28.679052  471804 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9720,"bootTime":1761380249,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:59:28.679182  471804 start.go:141] virtualization:  
	I1025 10:59:28.686515  471804 out.go:179] * [auto-759329] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:59:28.690203  471804 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:59:28.690346  471804 notify.go:220] Checking for updates...
	I1025 10:59:28.696746  471804 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:59:28.700021  471804 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:59:28.703464  471804 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:59:28.706574  471804 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:59:28.709934  471804 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:59:28.713836  471804 config.go:182] Loaded profile config "no-preload-093313": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:59:28.713931  471804 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:59:28.761175  471804 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:59:28.761316  471804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:59:28.861339  471804 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:59:28.849915244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:59:28.861457  471804 docker.go:318] overlay module found
	I1025 10:59:28.865763  471804 out.go:179] * Using the docker driver based on user configuration
	I1025 10:59:28.868840  471804 start.go:305] selected driver: docker
	I1025 10:59:28.868864  471804 start.go:925] validating driver "docker" against <nil>
	I1025 10:59:28.868880  471804 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:59:28.869581  471804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:59:28.970167  471804 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:59:28.956406711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:59:28.970324  471804 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:59:28.970587  471804 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:59:28.974318  471804 out.go:179] * Using Docker driver with root privileges
	I1025 10:59:28.977632  471804 cni.go:84] Creating CNI manager for ""
	I1025 10:59:28.977710  471804 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:59:28.977724  471804 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:59:28.977959  471804 start.go:349] cluster config:
	{Name:auto-759329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-759329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:59:28.981743  471804 out.go:179] * Starting "auto-759329" primary control-plane node in "auto-759329" cluster
	I1025 10:59:28.985053  471804 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:59:28.988497  471804 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:59:28.991836  471804 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:59:28.991896  471804 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:59:28.991935  471804 cache.go:58] Caching tarball of preloaded images
	I1025 10:59:28.991997  471804 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:59:28.992257  471804 preload.go:233] Found /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:59:28.992274  471804 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:59:28.992391  471804 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/config.json ...
	I1025 10:59:28.992415  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/config.json: {Name:mk9a099f923bcbe085931afb7521cd2dae64de56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:29.017427  471804 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:59:29.017446  471804 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:59:29.017459  471804 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:59:29.017493  471804 start.go:360] acquireMachinesLock for auto-759329: {Name:mk57d9b9df7c393b7f55fadb9067894f3795e532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:59:29.017578  471804 start.go:364] duration metric: took 69.326µs to acquireMachinesLock for "auto-759329"
	I1025 10:59:29.017602  471804 start.go:93] Provisioning new machine with config: &{Name:auto-759329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-759329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:59:29.017679  471804 start.go:125] createHost starting for "" (driver="docker")
	W1025 10:59:28.583372  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	W1025 10:59:31.082608  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	I1025 10:59:29.021724  471804 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:59:29.022073  471804 start.go:159] libmachine.API.Create for "auto-759329" (driver="docker")
	I1025 10:59:29.022115  471804 client.go:168] LocalClient.Create starting
	I1025 10:59:29.022210  471804 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem
	I1025 10:59:29.022262  471804 main.go:141] libmachine: Decoding PEM data...
	I1025 10:59:29.022283  471804 main.go:141] libmachine: Parsing certificate...
	I1025 10:59:29.022362  471804 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem
	I1025 10:59:29.022386  471804 main.go:141] libmachine: Decoding PEM data...
	I1025 10:59:29.022397  471804 main.go:141] libmachine: Parsing certificate...
	I1025 10:59:29.022762  471804 cli_runner.go:164] Run: docker network inspect auto-759329 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:59:29.043711  471804 cli_runner.go:211] docker network inspect auto-759329 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:59:29.043809  471804 network_create.go:284] running [docker network inspect auto-759329] to gather additional debugging logs...
	I1025 10:59:29.043833  471804 cli_runner.go:164] Run: docker network inspect auto-759329
	W1025 10:59:29.062269  471804 cli_runner.go:211] docker network inspect auto-759329 returned with exit code 1
	I1025 10:59:29.062298  471804 network_create.go:287] error running [docker network inspect auto-759329]: docker network inspect auto-759329: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-759329 not found
	I1025 10:59:29.062312  471804 network_create.go:289] output of [docker network inspect auto-759329]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-759329 not found
	
	** /stderr **
	I1025 10:59:29.062427  471804 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:59:29.087921  471804 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2218a4d410c8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:a0:c3:54:c6:1f} reservation:<nil>}
	I1025 10:59:29.088290  471804 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-249eaf2d238d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:87:b9:4d:4c:0d} reservation:<nil>}
	I1025 10:59:29.088530  471804 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-210d4b236ff6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:3a:d5:32:45:e6:85} reservation:<nil>}
	I1025 10:59:29.088943  471804 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a58720}
	I1025 10:59:29.088978  471804 network_create.go:124] attempt to create docker network auto-759329 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 10:59:29.089040  471804 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-759329 auto-759329
	I1025 10:59:29.171994  471804 network_create.go:108] docker network auto-759329 192.168.76.0/24 created
	I1025 10:59:29.172030  471804 kic.go:121] calculated static IP "192.168.76.2" for the "auto-759329" container
	I1025 10:59:29.172113  471804 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:59:29.207652  471804 cli_runner.go:164] Run: docker volume create auto-759329 --label name.minikube.sigs.k8s.io=auto-759329 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:59:29.235021  471804 oci.go:103] Successfully created a docker volume auto-759329
	I1025 10:59:29.235154  471804 cli_runner.go:164] Run: docker run --rm --name auto-759329-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-759329 --entrypoint /usr/bin/test -v auto-759329:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:59:30.292800  471804 cli_runner.go:217] Completed: docker run --rm --name auto-759329-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-759329 --entrypoint /usr/bin/test -v auto-759329:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.057595676s)
	I1025 10:59:30.292834  471804 oci.go:107] Successfully prepared a docker volume auto-759329
	I1025 10:59:30.292865  471804 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:59:30.292887  471804 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:59:30.292958  471804 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-759329:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 10:59:33.578403  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	W1025 10:59:35.578864  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	W1025 10:59:37.579496  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	I1025 10:59:36.181138  471804 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-759329:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.888138562s)
	I1025 10:59:36.181175  471804 kic.go:203] duration metric: took 5.88828418s to extract preloaded images to volume ...
	W1025 10:59:36.181304  471804 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:59:36.181421  471804 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:59:36.268629  471804 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-759329 --name auto-759329 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-759329 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-759329 --network auto-759329 --ip 192.168.76.2 --volume auto-759329:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:59:36.632424  471804 cli_runner.go:164] Run: docker container inspect auto-759329 --format={{.State.Running}}
	I1025 10:59:36.657883  471804 cli_runner.go:164] Run: docker container inspect auto-759329 --format={{.State.Status}}
	I1025 10:59:36.690373  471804 cli_runner.go:164] Run: docker exec auto-759329 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:59:36.755819  471804 oci.go:144] the created container "auto-759329" has a running status.
	I1025 10:59:36.755862  471804 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa...
	I1025 10:59:37.526650  471804 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:59:37.546280  471804 cli_runner.go:164] Run: docker container inspect auto-759329 --format={{.State.Status}}
	I1025 10:59:37.564894  471804 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:59:37.564919  471804 kic_runner.go:114] Args: [docker exec --privileged auto-759329 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:59:37.617272  471804 cli_runner.go:164] Run: docker container inspect auto-759329 --format={{.State.Status}}
	I1025 10:59:37.637628  471804 machine.go:93] provisionDockerMachine start ...
	I1025 10:59:37.637738  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:37.666759  471804 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:37.668151  471804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1025 10:59:37.668168  471804 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:59:37.668809  471804 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53490->127.0.0.1:33458: read: connection reset by peer
	W1025 10:59:40.078353  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	W1025 10:59:42.089316  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	I1025 10:59:40.821463  471804 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-759329
	
	I1025 10:59:40.821486  471804 ubuntu.go:182] provisioning hostname "auto-759329"
	I1025 10:59:40.821547  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:40.842092  471804 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:40.842446  471804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1025 10:59:40.842463  471804 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-759329 && echo "auto-759329" | sudo tee /etc/hostname
	I1025 10:59:41.010622  471804 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-759329
	
	I1025 10:59:41.010733  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:41.036275  471804 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:41.036689  471804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1025 10:59:41.036734  471804 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-759329' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-759329/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-759329' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:59:41.201962  471804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:59:41.202053  471804 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-259409/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-259409/.minikube}
	I1025 10:59:41.202127  471804 ubuntu.go:190] setting up certificates
	I1025 10:59:41.202159  471804 provision.go:84] configureAuth start
	I1025 10:59:41.202263  471804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-759329
	I1025 10:59:41.220307  471804 provision.go:143] copyHostCerts
	I1025 10:59:41.220376  471804 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem, removing ...
	I1025 10:59:41.220385  471804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem
	I1025 10:59:41.220466  471804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/key.pem (1675 bytes)
	I1025 10:59:41.220563  471804 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem, removing ...
	I1025 10:59:41.220568  471804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem
	I1025 10:59:41.220599  471804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/ca.pem (1078 bytes)
	I1025 10:59:41.220656  471804 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem, removing ...
	I1025 10:59:41.220661  471804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem
	I1025 10:59:41.220685  471804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-259409/.minikube/cert.pem (1123 bytes)
	I1025 10:59:41.220735  471804 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem org=jenkins.auto-759329 san=[127.0.0.1 192.168.76.2 auto-759329 localhost minikube]
	I1025 10:59:41.640080  471804 provision.go:177] copyRemoteCerts
	I1025 10:59:41.640150  471804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:59:41.640194  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:41.658689  471804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa Username:docker}
	I1025 10:59:41.761886  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 10:59:41.782102  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1025 10:59:41.801568  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:59:41.819600  471804 provision.go:87] duration metric: took 617.401419ms to configureAuth
	I1025 10:59:41.819629  471804 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:59:41.819818  471804 config.go:182] Loaded profile config "auto-759329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:59:41.819935  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:41.837197  471804 main.go:141] libmachine: Using SSH client type: native
	I1025 10:59:41.837565  471804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1025 10:59:41.837596  471804 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:59:42.204209  471804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:59:42.204251  471804 machine.go:96] duration metric: took 4.566598868s to provisionDockerMachine
	I1025 10:59:42.204264  471804 client.go:171] duration metric: took 13.182136004s to LocalClient.Create
	I1025 10:59:42.204290  471804 start.go:167] duration metric: took 13.182219312s to libmachine.API.Create "auto-759329"
	I1025 10:59:42.204300  471804 start.go:293] postStartSetup for "auto-759329" (driver="docker")
	I1025 10:59:42.204311  471804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:59:42.204414  471804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:59:42.204466  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:42.232944  471804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa Username:docker}
	I1025 10:59:42.343216  471804 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:59:42.347649  471804 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:59:42.347676  471804 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:59:42.347688  471804 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/addons for local assets ...
	I1025 10:59:42.347743  471804 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-259409/.minikube/files for local assets ...
	I1025 10:59:42.347844  471804 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem -> 2612562.pem in /etc/ssl/certs
	I1025 10:59:42.347949  471804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:59:42.358641  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:59:42.378520  471804 start.go:296] duration metric: took 174.202137ms for postStartSetup
	I1025 10:59:42.378968  471804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-759329
	I1025 10:59:42.396258  471804 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/config.json ...
	I1025 10:59:42.396546  471804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:59:42.396601  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:42.420359  471804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa Username:docker}
	I1025 10:59:42.528030  471804 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:59:42.532902  471804 start.go:128] duration metric: took 13.515206258s to createHost
	I1025 10:59:42.532927  471804 start.go:83] releasing machines lock for "auto-759329", held for 13.515339551s
	I1025 10:59:42.532998  471804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-759329
	I1025 10:59:42.549664  471804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:59:42.549742  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:42.549664  471804 ssh_runner.go:195] Run: cat /version.json
	I1025 10:59:42.550026  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 10:59:42.582902  471804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa Username:docker}
	I1025 10:59:42.586062  471804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa Username:docker}
	I1025 10:59:42.689949  471804 ssh_runner.go:195] Run: systemctl --version
	I1025 10:59:42.803836  471804 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:59:42.854306  471804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:59:42.859571  471804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:59:42.859707  471804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:59:42.890748  471804 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 10:59:42.890823  471804 start.go:495] detecting cgroup driver to use...
	I1025 10:59:42.890867  471804 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:59:42.890927  471804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:59:42.908428  471804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:59:42.921439  471804 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:59:42.921504  471804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:59:42.939271  471804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:59:42.957630  471804 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:59:43.091505  471804 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:59:43.227750  471804 docker.go:234] disabling docker service ...
	I1025 10:59:43.227860  471804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:59:43.249322  471804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:59:43.262716  471804 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:59:43.387687  471804 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:59:43.516973  471804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:59:43.530451  471804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:59:43.545890  471804 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:59:43.546039  471804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:43.555962  471804 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:59:43.556084  471804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:43.565343  471804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:43.577446  471804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:43.587144  471804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:59:43.595362  471804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:43.604657  471804 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:43.618609  471804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:59:43.628001  471804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:59:43.635716  471804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:59:43.643963  471804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:43.762457  471804 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:59:43.911735  471804 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:59:43.911869  471804 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:59:43.916710  471804 start.go:563] Will wait 60s for crictl version
	I1025 10:59:43.916823  471804 ssh_runner.go:195] Run: which crictl
	I1025 10:59:43.920421  471804 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:59:43.949360  471804 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:59:43.949450  471804 ssh_runner.go:195] Run: crio --version
	I1025 10:59:43.978054  471804 ssh_runner.go:195] Run: crio --version
	I1025 10:59:44.014942  471804 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:59:44.018086  471804 cli_runner.go:164] Run: docker network inspect auto-759329 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:59:44.038057  471804 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:59:44.042465  471804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:59:44.052906  471804 kubeadm.go:883] updating cluster {Name:auto-759329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-759329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:59:44.053018  471804 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:59:44.053088  471804 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:59:44.091703  471804 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:59:44.091728  471804 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:59:44.091789  471804 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:59:44.117273  471804 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:59:44.117297  471804 cache_images.go:85] Images are preloaded, skipping loading
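
	[editor's note] minikube decides whether to extract its preload tarball by listing images over the CRI, as the two crictl runs above show. A rough equivalent from a shell, assuming jq is available (jq is not something minikube itself uses here):

	    # Count the images the runtime already holds; a populated list lets minikube skip extraction.
	    sudo crictl images --output json | jq '.images | length'
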
	I1025 10:59:44.117305  471804 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:59:44.117399  471804 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-759329 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-759329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
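
	[editor's note] The unit drop-in rendered above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines a few entries below). To confirm which ExecStart systemd actually resolved after the override, a quick check:

	    # Show the merged kubelet unit, including the drop-in minikube wrote.
	    systemctl cat kubelet
	    # The empty ExecStart= line clears the packaged command before the override applies.
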
	I1025 10:59:44.117481  471804 ssh_runner.go:195] Run: crio config
	I1025 10:59:44.179445  471804 cni.go:84] Creating CNI manager for ""
	I1025 10:59:44.179471  471804 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:59:44.179496  471804 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:59:44.179519  471804 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-759329 NodeName:auto-759329 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:59:44.179645  471804 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-759329"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
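
	[editor's note] The stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new before init, per the scp line that follows. Newer kubeadm releases can sanity-check such a file; a sketch, assuming a kubeadm version that ships the `config validate` subcommand:

	    # Validate the generated config before kubeadm init consumes it.
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
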
	
	I1025 10:59:44.179722  471804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:59:44.187770  471804 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:59:44.187850  471804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:59:44.198936  471804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1025 10:59:44.211753  471804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:59:44.224743  471804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1025 10:59:44.237332  471804 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:59:44.240908  471804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:59:44.250874  471804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:59:44.375900  471804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:59:44.394650  471804 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329 for IP: 192.168.76.2
	I1025 10:59:44.394673  471804 certs.go:195] generating shared ca certs ...
	I1025 10:59:44.394693  471804 certs.go:227] acquiring lock for ca certs: {Name:mk3a546246434237f1c9b019a2f7b74e3336d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:44.394834  471804 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key
	I1025 10:59:44.394907  471804 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key
	I1025 10:59:44.394924  471804 certs.go:257] generating profile certs ...
	I1025 10:59:44.394981  471804 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.key
	I1025 10:59:44.395011  471804 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt with IP's: []
	I1025 10:59:44.930305  471804 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt ...
	I1025 10:59:44.930345  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: {Name:mk05b73ea04ab5ee129def3316d21b6b2b287e96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:44.930546  471804 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.key ...
	I1025 10:59:44.930560  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.key: {Name:mk877bd223d52053289e0941866975723f319f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:44.930655  471804 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.key.b622d078
	I1025 10:59:44.930672  471804 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.crt.b622d078 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 10:59:45.243268  471804 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.crt.b622d078 ...
	I1025 10:59:45.243309  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.crt.b622d078: {Name:mk1cf1e7061abee4b0dde352272cd5871264daee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:45.243516  471804 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.key.b622d078 ...
	I1025 10:59:45.243534  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.key.b622d078: {Name:mkb93fc7764be6596c71c130fc19807c2c97aeb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:45.243646  471804 certs.go:382] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.crt.b622d078 -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.crt
	I1025 10:59:45.243743  471804 certs.go:386] copying /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.key.b622d078 -> /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.key
	I1025 10:59:45.243814  471804 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.key
	I1025 10:59:45.243838  471804 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.crt with IP's: []
	I1025 10:59:45.409870  471804 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.crt ...
	I1025 10:59:45.409911  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.crt: {Name:mk8a7c0eb7aaf9e771bc256829f0e53ad7cafb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:59:45.410144  471804 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.key ...
	I1025 10:59:45.410166  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.key: {Name:mk8cc6dbaf10265fb28100f3ba5e820344cb5629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
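
	[editor's note] The apiserver certificate generated above is signed for the service IP (10.96.0.1), localhost, and the node IP (192.168.76.2). A sketch for inspecting the SANs on such a cert with openssl, using the path from the log:

	    # Print the Subject Alternative Names baked into the apiserver cert.
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'
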
	I1025 10:59:45.410426  471804 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem (1338 bytes)
	W1025 10:59:45.410479  471804 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256_empty.pem, impossibly tiny 0 bytes
	I1025 10:59:45.410493  471804 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 10:59:45.410576  471804 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/ca.pem (1078 bytes)
	I1025 10:59:45.410614  471804 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:59:45.410645  471804 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/certs/key.pem (1675 bytes)
	I1025 10:59:45.410727  471804 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem (1708 bytes)
	I1025 10:59:45.411379  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:59:45.439221  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:59:45.469081  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:59:45.494150  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 10:59:45.516419  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1025 10:59:45.536697  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:59:45.556632  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:59:45.578755  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:59:45.598649  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/ssl/certs/2612562.pem --> /usr/share/ca-certificates/2612562.pem (1708 bytes)
	I1025 10:59:45.617537  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:59:45.636243  471804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-259409/.minikube/certs/261256.pem --> /usr/share/ca-certificates/261256.pem (1338 bytes)
	I1025 10:59:45.655850  471804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:59:45.669094  471804 ssh_runner.go:195] Run: openssl version
	I1025 10:59:45.676921  471804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/261256.pem && ln -fs /usr/share/ca-certificates/261256.pem /etc/ssl/certs/261256.pem"
	I1025 10:59:45.686023  471804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/261256.pem
	I1025 10:59:45.690127  471804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:53 /usr/share/ca-certificates/261256.pem
	I1025 10:59:45.690261  471804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/261256.pem
	I1025 10:59:45.732415  471804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/261256.pem /etc/ssl/certs/51391683.0"
	I1025 10:59:45.741184  471804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2612562.pem && ln -fs /usr/share/ca-certificates/2612562.pem /etc/ssl/certs/2612562.pem"
	I1025 10:59:45.749837  471804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2612562.pem
	I1025 10:59:45.753790  471804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:53 /usr/share/ca-certificates/2612562.pem
	I1025 10:59:45.753905  471804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2612562.pem
	I1025 10:59:45.797196  471804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2612562.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:59:45.806171  471804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:59:45.815152  471804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:45.819393  471804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:45.819467  471804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:59:45.862149  471804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
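
	[editor's note] The `openssl x509 -hash` runs above explain the symlink names that follow: OpenSSL's c_rehash convention links each trusted CA as <subject-hash>.0 under /etc/ssl/certs (b5213941.0 for minikubeCA here). A condensed sketch of the same step:

	    # Compute the subject hash and create the lookup symlink OpenSSL expects.
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
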
	I1025 10:59:45.871520  471804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:59:45.875419  471804 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:59:45.875518  471804 kubeadm.go:400] StartCluster: {Name:auto-759329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-759329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:59:45.875609  471804 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:59:45.875677  471804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:59:45.916979  471804 cri.go:89] found id: ""
	I1025 10:59:45.917068  471804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:59:45.930835  471804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:59:45.939638  471804 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:59:45.939705  471804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:59:45.950633  471804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:59:45.950656  471804 kubeadm.go:157] found existing configuration files:
	
	I1025 10:59:45.950719  471804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:59:45.962928  471804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:59:45.963024  471804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:59:45.971078  471804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:59:45.979315  471804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:59:45.979405  471804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:59:45.987366  471804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:59:45.995603  471804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:59:45.995671  471804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:59:46.007338  471804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:59:46.017036  471804 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:59:46.017166  471804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:59:46.036609  471804 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:59:46.093914  471804 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:59:46.094055  471804 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:59:46.131648  471804 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:59:46.131808  471804 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:59:46.131881  471804 kubeadm.go:318] OS: Linux
	I1025 10:59:46.131955  471804 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:59:46.132036  471804 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:59:46.132121  471804 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:59:46.132199  471804 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:59:46.132282  471804 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:59:46.132358  471804 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:59:46.132437  471804 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:59:46.132516  471804 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:59:46.132598  471804 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:59:46.228002  471804 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:59:46.228151  471804 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:59:46.228261  471804 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:59:46.235949  471804 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1025 10:59:44.578546  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	W1025 10:59:46.580788  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	I1025 10:59:46.242699  471804 out.go:252]   - Generating certificates and keys ...
	I1025 10:59:46.242886  471804 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:59:46.242972  471804 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:59:46.719270  471804 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:59:47.317663  471804 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:59:48.018256  471804 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:59:48.090859  471804 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:59:48.662525  471804 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:59:48.662890  471804 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-759329 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1025 10:59:48.582733  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	W1025 10:59:51.079883  467402 pod_ready.go:104] pod "coredns-66bc5c9577-c56mp" is not "Ready", error: <nil>
	I1025 10:59:52.085562  467402 pod_ready.go:94] pod "coredns-66bc5c9577-c56mp" is "Ready"
	I1025 10:59:52.085594  467402 pod_ready.go:86] duration metric: took 32.013390888s for pod "coredns-66bc5c9577-c56mp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.088484  467402 pod_ready.go:83] waiting for pod "etcd-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.093125  467402 pod_ready.go:94] pod "etcd-no-preload-093313" is "Ready"
	I1025 10:59:52.093154  467402 pod_ready.go:86] duration metric: took 4.646663ms for pod "etcd-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.095578  467402 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.100114  467402 pod_ready.go:94] pod "kube-apiserver-no-preload-093313" is "Ready"
	I1025 10:59:52.100138  467402 pod_ready.go:86] duration metric: took 4.527294ms for pod "kube-apiserver-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.102614  467402 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.276623  467402 pod_ready.go:94] pod "kube-controller-manager-no-preload-093313" is "Ready"
	I1025 10:59:52.276657  467402 pod_ready.go:86] duration metric: took 174.006426ms for pod "kube-controller-manager-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.477129  467402 pod_ready.go:83] waiting for pod "kube-proxy-vlb79" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:52.877080  467402 pod_ready.go:94] pod "kube-proxy-vlb79" is "Ready"
	I1025 10:59:52.877113  467402 pod_ready.go:86] duration metric: took 399.952016ms for pod "kube-proxy-vlb79" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:53.076468  467402 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:53.476980  467402 pod_ready.go:94] pod "kube-scheduler-no-preload-093313" is "Ready"
	I1025 10:59:53.477019  467402 pod_ready.go:86] duration metric: took 400.521359ms for pod "kube-scheduler-no-preload-093313" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:59:53.477033  467402 pod_ready.go:40] duration metric: took 33.408544557s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:59:53.563381  467402 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:59:53.566681  467402 out.go:179] * Done! kubectl is now configured to use "no-preload-093313" cluster and "default" namespace by default
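
	[editor's note] The pod_ready polling above (process 467402, interleaved with the auto-759329 start) loops over kube-system pods by label until each reports Ready. The equivalent one-shot wait with plain kubectl, as a sketch:

	    # Block until the CoreDNS pods in kube-system report Ready (or the timeout hits).
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
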
	I1025 10:59:49.253932  471804 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:59:49.254342  471804 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-759329 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:59:49.648303  471804 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:59:50.160750  471804 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:59:51.249665  471804 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:59:51.250151  471804 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:59:51.376997  471804 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:59:51.943528  471804 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:59:52.970078  471804 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:59:53.345480  471804 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:59:53.909105  471804 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:59:53.910618  471804 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:59:53.919580  471804 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:59:53.928504  471804 out.go:252]   - Booting up control plane ...
	I1025 10:59:53.928614  471804 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:59:53.928697  471804 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:59:53.929933  471804 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:59:53.953115  471804 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:59:53.953222  471804 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:59:53.962334  471804 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:59:53.962440  471804 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:59:53.962482  471804 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:59:54.148660  471804 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:59:54.148779  471804 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:59:55.649950  471804 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501381772s
	I1025 10:59:55.653526  471804 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:59:55.653639  471804 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 10:59:55.654252  471804 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:59:55.654356  471804 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:59:57.840417  471804 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.186426196s
	I1025 11:00:00.181827  471804 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.528207325s
	I1025 11:00:03.655448  471804 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.001791972s
	I1025 11:00:03.679707  471804 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 11:00:03.696439  471804 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 11:00:03.711918  471804 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 11:00:03.712147  471804 kubeadm.go:318] [mark-control-plane] Marking the node auto-759329 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 11:00:03.725160  471804 kubeadm.go:318] [bootstrap-token] Using token: 8gak69.s70srqn4njacqoj2
	I1025 11:00:03.728172  471804 out.go:252]   - Configuring RBAC rules ...
	I1025 11:00:03.728329  471804 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 11:00:03.733748  471804 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 11:00:03.747632  471804 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 11:00:03.754364  471804 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 11:00:03.758714  471804 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 11:00:03.763035  471804 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 11:00:04.063017  471804 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 11:00:04.540357  471804 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 11:00:05.063269  471804 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 11:00:05.064654  471804 kubeadm.go:318] 
	I1025 11:00:05.064737  471804 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 11:00:05.064750  471804 kubeadm.go:318] 
	I1025 11:00:05.064844  471804 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 11:00:05.064856  471804 kubeadm.go:318] 
	I1025 11:00:05.064883  471804 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 11:00:05.064951  471804 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 11:00:05.065010  471804 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 11:00:05.065019  471804 kubeadm.go:318] 
	I1025 11:00:05.065076  471804 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 11:00:05.065084  471804 kubeadm.go:318] 
	I1025 11:00:05.065134  471804 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 11:00:05.065142  471804 kubeadm.go:318] 
	I1025 11:00:05.065197  471804 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 11:00:05.065285  471804 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 11:00:05.065362  471804 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 11:00:05.065372  471804 kubeadm.go:318] 
	I1025 11:00:05.065460  471804 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 11:00:05.065554  471804 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 11:00:05.065564  471804 kubeadm.go:318] 
	I1025 11:00:05.065660  471804 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 8gak69.s70srqn4njacqoj2 \
	I1025 11:00:05.065773  471804 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 \
	I1025 11:00:05.065799  471804 kubeadm.go:318] 	--control-plane 
	I1025 11:00:05.065808  471804 kubeadm.go:318] 
	I1025 11:00:05.065897  471804 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 11:00:05.065906  471804 kubeadm.go:318] 
	I1025 11:00:05.066017  471804 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 8gak69.s70srqn4njacqoj2 \
	I1025 11:00:05.066129  471804 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:537c864aeffbd98a53a718d258f7bd1f28aa4a0d15716971a35abdfdaaeb28d5 
	I1025 11:00:05.069736  471804 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 11:00:05.070011  471804 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 11:00:05.070126  471804 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
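
	[editor's note] The join commands above embed a --discovery-token-ca-cert-hash. If the printed hash is lost, it can be recomputed from the cluster CA using the procedure documented for kubeadm; a sketch using the CertDir from this log (/var/lib/minikube/certs):

	    # Recompute the discovery-token CA cert hash from the control plane's CA.
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
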
	I1025 11:00:05.070151  471804 cni.go:84] Creating CNI manager for ""
	I1025 11:00:05.070163  471804 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 11:00:05.073585  471804 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 11:00:05.076615  471804 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 11:00:05.081421  471804 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 11:00:05.081448  471804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 11:00:05.098550  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
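
	[editor's note] After the kindnet manifest is applied, its rollout can be checked by hand; kindnet runs as a DaemonSet in kube-system (the DaemonSet name and pod label below are assumptions taken from the kindnet manifest, not shown in this log):

	    # Confirm the CNI DaemonSet rolled out on the new node.
	    kubectl -n kube-system get daemonset kindnet
	    kubectl -n kube-system get pods -l app=kindnet
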
	I1025 11:00:05.631060  471804 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 11:00:05.631201  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:05.631270  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-759329 minikube.k8s.io/updated_at=2025_10_25T11_00_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=auto-759329 minikube.k8s.io/primary=true
	I1025 11:00:05.663594  471804 ops.go:34] apiserver oom_adj: -16
	I1025 11:00:05.874182  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:06.374412  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:06.875201  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:07.375115  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:07.874611  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:08.375187  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:08.875064  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:09.374271  471804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 11:00:09.553503  471804 kubeadm.go:1113] duration metric: took 3.922351958s to wait for elevateKubeSystemPrivileges
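
	[editor's note] The repeating `kubectl get sa default` runs above are a readiness poll: minikube retries until the default ServiceAccount exists before binding cluster-admin to kube-system:default (the clusterrolebinding at 11:00:05.631). A bash rendering of the same poll, as a sketch:

	    # Poll until the controller manager has created the default ServiceAccount.
	    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	      sleep 0.5
	    done
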
	I1025 11:00:09.553532  471804 kubeadm.go:402] duration metric: took 23.678016982s to StartCluster
	I1025 11:00:09.553549  471804 settings.go:142] acquiring lock: {Name:mk78071aea90144fb2e8f63b90de4792d7a3522a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 11:00:09.553617  471804 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 11:00:09.554618  471804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/kubeconfig: {Name:mk3cf90e53647b03c322b28ae8a2a204e18f4e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 11:00:09.554863  471804 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 11:00:09.554958  471804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 11:00:09.555223  471804 config.go:182] Loaded profile config "auto-759329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 11:00:09.555267  471804 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 11:00:09.555327  471804 addons.go:69] Setting storage-provisioner=true in profile "auto-759329"
	I1025 11:00:09.555349  471804 addons.go:238] Setting addon storage-provisioner=true in "auto-759329"
	I1025 11:00:09.555369  471804 host.go:66] Checking if "auto-759329" exists ...
	I1025 11:00:09.556067  471804 cli_runner.go:164] Run: docker container inspect auto-759329 --format={{.State.Status}}
	I1025 11:00:09.556426  471804 addons.go:69] Setting default-storageclass=true in profile "auto-759329"
	I1025 11:00:09.556449  471804 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-759329"
	I1025 11:00:09.556724  471804 cli_runner.go:164] Run: docker container inspect auto-759329 --format={{.State.Status}}
	I1025 11:00:09.560918  471804 out.go:179] * Verifying Kubernetes components...
	I1025 11:00:09.568515  471804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 11:00:09.607117  471804 addons.go:238] Setting addon default-storageclass=true in "auto-759329"
	I1025 11:00:09.607160  471804 host.go:66] Checking if "auto-759329" exists ...
	I1025 11:00:09.607580  471804 cli_runner.go:164] Run: docker container inspect auto-759329 --format={{.State.Status}}
	I1025 11:00:09.609870  471804 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 11:00:09.613272  471804 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 11:00:09.613296  471804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 11:00:09.613363  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 11:00:09.648475  471804 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 11:00:09.648497  471804 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 11:00:09.648564  471804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-759329
	I1025 11:00:09.674132  471804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa Username:docker}
	I1025 11:00:09.684372  471804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/auto-759329/id_rsa Username:docker}
	I1025 11:00:10.007482  471804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 11:00:10.044910  471804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 11:00:10.079633  471804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 11:00:10.156672  471804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 11:00:10.983437  471804 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1025 11:00:11.493853  471804 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-759329" context rescaled to 1 replicas
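
	[editor's note] The sed pipeline at 11:00:10.007 splices a hosts{} block into the CoreDNS Corefile so host.minikube.internal resolves from pods, which is what the "host record injected" line confirms. To see the result, a sketch:

	    # Dump the live Corefile; the injected hosts block should map 192.168.76.1.
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
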
	I1025 11:00:11.502607  471804 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.457609955s)
	I1025 11:00:11.502654  471804 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.422948008s)
	I1025 11:00:11.502851  471804 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.34610741s)
	I1025 11:00:11.503662  471804 node_ready.go:35] waiting up to 15m0s for node "auto-759329" to be "Ready" ...
	I1025 11:00:11.530852  471804 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
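
	[editor's note] With storage-provisioner and default-storageclass enabled, addon state can be listed per profile using the same binary this report exercises:

	    # List addon status for the auto-759329 profile.
	    out/minikube-linux-arm64 -p auto-759329 addons list
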
	
	
	==> CRI-O <==
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.371950358Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a9a693d6-3013-478e-b7b6-417bbef272ad name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.380421999Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e37cff44-5f8d-4e4b-8928-e13de79b3365 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.385807926Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq/dashboard-metrics-scraper" id=6c05ae64-8223-4f64-9f03-34f7210f9b02 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.387130572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.400445505Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.401254563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.421615106Z" level=info msg="Created container f1c50364f80a6c172e9ab40c700aecaed5832969281e3fafcc082a07d4f2ff68: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq/dashboard-metrics-scraper" id=6c05ae64-8223-4f64-9f03-34f7210f9b02 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.449171087Z" level=info msg="Starting container: f1c50364f80a6c172e9ab40c700aecaed5832969281e3fafcc082a07d4f2ff68" id=5218cfe0-18aa-4c28-bb64-a44c0d472706 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.466140814Z" level=info msg="Started container" PID=1661 containerID=f1c50364f80a6c172e9ab40c700aecaed5832969281e3fafcc082a07d4f2ff68 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq/dashboard-metrics-scraper id=5218cfe0-18aa-4c28-bb64-a44c0d472706 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b781afab0162e41cce030bf1abee96ba76c927cb7901aaf10aa1e9874a84755
	Oct 25 10:59:54 no-preload-093313 conmon[1659]: conmon f1c50364f80a6c172e9a <ninfo>: container 1661 exited with status 1
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.716305141Z" level=info msg="Removing container: 640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e" id=983f9d2a-e492-4168-b0b8-812b83d168c1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.729875437Z" level=info msg="Error loading conmon cgroup of container 640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e: cgroup deleted" id=983f9d2a-e492-4168-b0b8-812b83d168c1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:59:54 no-preload-093313 crio[650]: time="2025-10-25T10:59:54.738847522Z" level=info msg="Removed container 640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq/dashboard-metrics-scraper" id=983f9d2a-e492-4168-b0b8-812b83d168c1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.224553088Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.232422893Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.232625948Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.232704816Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.238528967Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.238723315Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.238819644Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.2467834Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.247003258Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.247088461Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.25462027Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 11:00:00 no-preload-093313 crio[650]: time="2025-10-25T11:00:00.254771607Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f1c50364f80a6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   5b781afab0162       dashboard-metrics-scraper-6ffb444bf9-k2vjq   kubernetes-dashboard
	35125532fae1d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago       Running             storage-provisioner         2                   7bc22231e165c       storage-provisioner                          kube-system
	9f492d4ffcca1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   accfb2225db41       kubernetes-dashboard-855c9754f9-xrszz        kubernetes-dashboard
	01259e3c29c7a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   46c8f3de3f7c3       coredns-66bc5c9577-c56mp                     kube-system
	58f9df9753fe7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   e239afa568164       busybox                                      default
	3fba2d7bed036       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   7fb1b394f9e78       kindnet-6tbtt                                kube-system
	16d8aea6ffa68       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   a756e7bf8e7fc       kube-proxy-vlb79                             kube-system
	f3cd0358e70fd       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago       Exited              storage-provisioner         1                   7bc22231e165c       storage-provisioner                          kube-system
	555a214631009       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   e4dbc9e42e264       etcd-no-preload-093313                       kube-system
	3dd46cc93a4d3       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6db5691194bcf       kube-controller-manager-no-preload-093313    kube-system
	1abb0086bfd53       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   55bece34c1abb       kube-scheduler-no-preload-093313             kube-system
	2c3118fc8aba3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c9783f23fe6e4       kube-apiserver-no-preload-093313             kube-system
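	The listing above is CRI-level container state, so exited attempts (the first storage-provisioner container, the crash-looping dashboard-metrics-scraper) remain visible alongside running ones. A hedged way to reproduce it directly on the node, assuming the profile name used throughout this run:
	
	# list all CRI containers, including exited ones (crictl ships in the minikube node image)
	out/minikube-linux-arm64 -p no-preload-093313 ssh -- sudo crictl ps -a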
	
	
	==> coredns [01259e3c29c7a48a1bfeb65d5897aef0275e5a400f418985c64b8bf48d14a17b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53649 - 43618 "HINFO IN 4211319309414066805.1736044630763128359. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016766444s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
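	The dial timeouts above target 10.96.0.1:443, the in-cluster kubernetes Service VIP, and coincide with the window in which kube-proxy had not yet re-programmed service rules after the restart; coredns recovered once its informers could list again. A hedged way to confirm the VIP and its backing endpoints from the client side (context name taken from this run):
	
	kubectl --context no-preload-093313 get svc kubernetes
	kubectl --context no-preload-093313 get endpointslices -l kubernetes.io/service-name=kubernetes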
	
	
	==> describe nodes <==
	Name:               no-preload-093313
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-093313
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=no-preload-093313
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_58_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:58:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-093313
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:59:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:59:38 +0000   Sat, 25 Oct 2025 10:58:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:59:38 +0000   Sat, 25 Oct 2025 10:58:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:59:38 +0000   Sat, 25 Oct 2025 10:58:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:59:38 +0000   Sat, 25 Oct 2025 10:58:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-093313
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                03f9066b-feaa-4e69-be40-1b2314524518
	  Boot ID:                    89aa6167-e700-462e-8969-7739a9f33dc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-c56mp                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     119s
	  kube-system                 etcd-no-preload-093313                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m4s
	  kube-system                 kindnet-6tbtt                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      119s
	  kube-system                 kube-apiserver-no-preload-093313              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-no-preload-093313     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-vlb79                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-no-preload-093313              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-k2vjq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xrszz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 117s                   kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Warning  CgroupV1                 2m14s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node no-preload-093313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node no-preload-093313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m14s (x8 over 2m14s)  kubelet          Node no-preload-093313 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m4s                   kubelet          Node no-preload-093313 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m4s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m4s                   kubelet          Node no-preload-093313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m4s                   kubelet          Node no-preload-093313 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m                     node-controller  Node no-preload-093313 event: Registered Node no-preload-093313 in Controller
	  Normal   NodeReady                103s                   kubelet          Node no-preload-093313 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node no-preload-093313 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node no-preload-093313 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node no-preload-093313 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node no-preload-093313 event: Registered Node no-preload-093313 in Controller
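	The three interleaved batches of CgroupV1/NodeHasSufficient* events (ages 2m14s, 2m4s, and 67s) correspond to three separate kubelet starts: initial node provisioning, the cluster start, and the stop/start exercised by this test group, so the apparent duplication is expected. A hedged way to see the raw node events in time order:
	
	# node events land in the default namespace; sort to line them up with the kubelet restarts
	kubectl --context no-preload-093313 get events --field-selector involvedObject.name=no-preload-093313 --sort-by=.lastTimestamp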
	
	
	==> dmesg <==
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	[ +24.992751] overlayfs: idmapped layers are currently not supported
	[Oct25 10:38] overlayfs: idmapped layers are currently not supported
	[Oct25 10:39] overlayfs: idmapped layers are currently not supported
	[Oct25 10:40] overlayfs: idmapped layers are currently not supported
	[Oct25 10:41] overlayfs: idmapped layers are currently not supported
	[Oct25 10:43] overlayfs: idmapped layers are currently not supported
	[Oct25 10:45] overlayfs: idmapped layers are currently not supported
	[ +32.163680] overlayfs: idmapped layers are currently not supported
	[Oct25 10:47] overlayfs: idmapped layers are currently not supported
	[Oct25 10:49] overlayfs: idmapped layers are currently not supported
	[Oct25 10:50] overlayfs: idmapped layers are currently not supported
	[Oct25 10:51] overlayfs: idmapped layers are currently not supported
	[  +5.236078] overlayfs: idmapped layers are currently not supported
	[Oct25 10:52] overlayfs: idmapped layers are currently not supported
	[Oct25 10:53] overlayfs: idmapped layers are currently not supported
	[Oct25 10:54] overlayfs: idmapped layers are currently not supported
	[Oct25 10:55] overlayfs: idmapped layers are currently not supported
	[Oct25 10:56] overlayfs: idmapped layers are currently not supported
	[ +41.501413] overlayfs: idmapped layers are currently not supported
	[Oct25 10:57] overlayfs: idmapped layers are currently not supported
	[Oct25 10:58] overlayfs: idmapped layers are currently not supported
	[Oct25 10:59] overlayfs: idmapped layers are currently not supported
	[  +1.429017] overlayfs: idmapped layers are currently not supported
	[ +48.923730] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [555a214631009b8c9e0ad146cf6605f03eec6b67635b74eb9d3950940eecf3f5] <==
	{"level":"warn","ts":"2025-10-25T10:59:14.016172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.066280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.116189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.156027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.194577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.233842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.286705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.315858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.337876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.364244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.393182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.431010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.464404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.498637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.551706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.561689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.593090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.677820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.720051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.762826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.786806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.832038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.838030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:14.871874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:59:15.060560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39764","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:00:13 up  2:42,  0 user,  load average: 4.69, 3.92, 3.19
	Linux no-preload-093313 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3fba2d7bed036af45d7420433508986b6404a9b71238141c3e885894291bec15] <==
	I1025 10:59:20.015030       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:59:20.015275       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:59:20.015433       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:59:20.015446       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:59:20.015458       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:59:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:59:20.224280       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:59:20.238155       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:59:20.238188       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:59:20.238292       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:59:50.225218       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:59:50.230618       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:59:50.230740       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:59:50.230838       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1025 10:59:51.138454       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:59:51.138514       1 metrics.go:72] Registering metrics
	I1025 10:59:51.138594       1 controller.go:711] "Syncing nftables rules"
	I1025 11:00:00.224113       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 11:00:00.224219       1 main.go:301] handling current node
	I1025 11:00:10.227239       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 11:00:10.227279       1 main.go:301] handling current node
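	The reflector failures at 10:59:50 mirror the coredns timeouts to the same Service VIP, and the caches sync one second later, so kindnet recovered without intervention. Had it not, a hedged next step would be to pull its container log by the short ID from the status table above:
	
	out/minikube-linux-arm64 -p no-preload-093313 ssh -- sudo crictl logs 3fba2d7bed036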
	
	
	==> kube-apiserver [2c3118fc8aba39e254ed98a90027a52eb3bc4eb55ca37aed37f0638d414d5a7c] <==
	I1025 10:59:17.337890       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:59:17.466793       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:59:17.378532       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:59:17.466937       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:59:17.493704       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:59:17.494170       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:59:17.503068       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:59:17.503148       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:59:17.503328       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:59:17.378169       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:59:17.504911       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:59:17.505161       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:59:17.547388       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1025 10:59:17.590904       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:59:17.832780       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:59:19.065800       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:59:19.262000       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:59:19.408503       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:59:19.489876       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:59:19.513074       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:59:19.660092       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.145.4"}
	I1025 10:59:19.696092       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.187.201"}
	I1025 10:59:21.547934       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:59:21.956068       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:59:22.009468       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3dd46cc93a4d340b21d6515927392c6d678062f1fd4a8eb33513a013a750df3f] <==
	I1025 10:59:21.549479       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:59:21.561255       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:59:21.562517       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:59:21.578217       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:59:21.578298       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:59:21.578329       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:59:21.579034       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:59:21.579163       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:59:21.579314       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:59:21.582506       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:59:21.582619       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:59:21.589477       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:59:21.595265       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:59:21.598631       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:59:21.598753       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:59:21.598917       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:59:21.599142       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:59:21.600517       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:59:21.601107       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:59:21.602501       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:59:21.602644       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:59:21.603854       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:59:21.606299       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:59:21.609614       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:59:21.614315       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	
	
	==> kube-proxy [16d8aea6ffa68886212ab4f1d30e95a58ec710ae4e0e9bad811855e16be7b0b8] <==
	I1025 10:59:20.035063       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:59:20.120442       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:59:20.222606       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:59:20.222647       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:59:20.222731       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:59:20.309706       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:59:20.309816       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:59:20.328899       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:59:20.330423       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:59:20.330953       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:59:20.332263       1 config.go:200] "Starting service config controller"
	I1025 10:59:20.332321       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:59:20.332361       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:59:20.332389       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:59:20.332425       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:59:20.332451       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:59:20.335476       1 config.go:309] "Starting node config controller"
	I1025 10:59:20.336627       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:59:20.336708       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:59:20.432998       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:59:20.433100       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:59:20.433126       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1abb0086bfd53a0a24fd6a972d03dfa536774e2a3214e984b2913d5d42eb1584] <==
	I1025 10:59:12.212899       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:59:17.402411       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:59:17.402470       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:59:17.402489       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:59:17.402498       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:59:17.640279       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:59:17.684227       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:59:17.692796       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:59:17.693020       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:59:17.731125       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:59:17.693038       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:59:18.034243       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
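	The requestheader and authentication warnings at 10:59:17 are the scheduler racing the apiserver's RBAC bootstrap immediately after restart; once its client-ca cache synced at 10:59:18 it served normally. A hedged check that the configmap it was denied has since become readable:
	
	kubectl --context no-preload-093313 -n kube-system get configmap extension-apiserver-authentication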
	
	
	==> kubelet <==
	Oct 25 10:59:22 no-preload-093313 kubelet[772]: I1025 10:59:22.295292     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtsww\" (UniqueName: \"kubernetes.io/projected/609a0c23-fcd6-4966-b4dd-6411fdf189f7-kube-api-access-qtsww\") pod \"kubernetes-dashboard-855c9754f9-xrszz\" (UID: \"609a0c23-fcd6-4966-b4dd-6411fdf189f7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xrszz"
	Oct 25 10:59:22 no-preload-093313 kubelet[772]: I1025 10:59:22.295380     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/386db8b1-28c7-49b0-b999-71145f94a1f7-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-k2vjq\" (UID: \"386db8b1-28c7-49b0-b999-71145f94a1f7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq"
	Oct 25 10:59:22 no-preload-093313 kubelet[772]: I1025 10:59:22.295406     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/609a0c23-fcd6-4966-b4dd-6411fdf189f7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-xrszz\" (UID: \"609a0c23-fcd6-4966-b4dd-6411fdf189f7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xrszz"
	Oct 25 10:59:22 no-preload-093313 kubelet[772]: I1025 10:59:22.295535     772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zf57\" (UniqueName: \"kubernetes.io/projected/386db8b1-28c7-49b0-b999-71145f94a1f7-kube-api-access-5zf57\") pod \"dashboard-metrics-scraper-6ffb444bf9-k2vjq\" (UID: \"386db8b1-28c7-49b0-b999-71145f94a1f7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq"
	Oct 25 10:59:23 no-preload-093313 kubelet[772]: W1025 10:59:23.189490     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/crio-accfb2225db4177b91eb1b3662d654d516331950a39442dbb0a1f710b17e7d4b WatchSource:0}: Error finding container accfb2225db4177b91eb1b3662d654d516331950a39442dbb0a1f710b17e7d4b: Status 404 returned error can't find the container with id accfb2225db4177b91eb1b3662d654d516331950a39442dbb0a1f710b17e7d4b
	Oct 25 10:59:23 no-preload-093313 kubelet[772]: W1025 10:59:23.211855     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6e8e2d881e7d2faee64c30d1e9667e71063dfe4d8854150602f38ca0b602873b/crio-5b781afab0162e41cce030bf1abee96ba76c927cb7901aaf10aa1e9874a84755 WatchSource:0}: Error finding container 5b781afab0162e41cce030bf1abee96ba76c927cb7901aaf10aa1e9874a84755: Status 404 returned error can't find the container with id 5b781afab0162e41cce030bf1abee96ba76c927cb7901aaf10aa1e9874a84755
	Oct 25 10:59:31 no-preload-093313 kubelet[772]: I1025 10:59:31.665969     772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xrszz" podStartSLOduration=2.163621599 podStartE2EDuration="9.665949923s" podCreationTimestamp="2025-10-25 10:59:22 +0000 UTC" firstStartedPulling="2025-10-25 10:59:23.19504164 +0000 UTC m=+17.448288298" lastFinishedPulling="2025-10-25 10:59:30.697369956 +0000 UTC m=+24.950616622" observedRunningTime="2025-10-25 10:59:31.665868372 +0000 UTC m=+25.919115079" watchObservedRunningTime="2025-10-25 10:59:31.665949923 +0000 UTC m=+25.919196581"
	Oct 25 10:59:36 no-preload-093313 kubelet[772]: I1025 10:59:36.648991     772 scope.go:117] "RemoveContainer" containerID="562baf6bb10a6069b8d5062293b9b4d7b207ce3e429c33e9aaeb2a3773e6c336"
	Oct 25 10:59:37 no-preload-093313 kubelet[772]: I1025 10:59:37.654528     772 scope.go:117] "RemoveContainer" containerID="562baf6bb10a6069b8d5062293b9b4d7b207ce3e429c33e9aaeb2a3773e6c336"
	Oct 25 10:59:37 no-preload-093313 kubelet[772]: I1025 10:59:37.654824     772 scope.go:117] "RemoveContainer" containerID="640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e"
	Oct 25 10:59:37 no-preload-093313 kubelet[772]: E1025 10:59:37.654964     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2vjq_kubernetes-dashboard(386db8b1-28c7-49b0-b999-71145f94a1f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq" podUID="386db8b1-28c7-49b0-b999-71145f94a1f7"
	Oct 25 10:59:38 no-preload-093313 kubelet[772]: I1025 10:59:38.658722     772 scope.go:117] "RemoveContainer" containerID="640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e"
	Oct 25 10:59:38 no-preload-093313 kubelet[772]: E1025 10:59:38.658901     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2vjq_kubernetes-dashboard(386db8b1-28c7-49b0-b999-71145f94a1f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq" podUID="386db8b1-28c7-49b0-b999-71145f94a1f7"
	Oct 25 10:59:43 no-preload-093313 kubelet[772]: I1025 10:59:43.140038     772 scope.go:117] "RemoveContainer" containerID="640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e"
	Oct 25 10:59:43 no-preload-093313 kubelet[772]: E1025 10:59:43.140613     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2vjq_kubernetes-dashboard(386db8b1-28c7-49b0-b999-71145f94a1f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq" podUID="386db8b1-28c7-49b0-b999-71145f94a1f7"
	Oct 25 10:59:50 no-preload-093313 kubelet[772]: I1025 10:59:50.690701     772 scope.go:117] "RemoveContainer" containerID="f3cd0358e70fd7964d67639fe6cd07db37eb01990ca5aa0384c07de252a3dd21"
	Oct 25 10:59:54 no-preload-093313 kubelet[772]: I1025 10:59:54.370206     772 scope.go:117] "RemoveContainer" containerID="640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e"
	Oct 25 10:59:54 no-preload-093313 kubelet[772]: I1025 10:59:54.705286     772 scope.go:117] "RemoveContainer" containerID="640a4b158073a480b8c6774c890e4f73832120b1cc4d522f7bd22808d528816e"
	Oct 25 10:59:54 no-preload-093313 kubelet[772]: I1025 10:59:54.705759     772 scope.go:117] "RemoveContainer" containerID="f1c50364f80a6c172e9ab40c700aecaed5832969281e3fafcc082a07d4f2ff68"
	Oct 25 10:59:54 no-preload-093313 kubelet[772]: E1025 10:59:54.706060     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2vjq_kubernetes-dashboard(386db8b1-28c7-49b0-b999-71145f94a1f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq" podUID="386db8b1-28c7-49b0-b999-71145f94a1f7"
	Oct 25 11:00:03 no-preload-093313 kubelet[772]: I1025 11:00:03.139545     772 scope.go:117] "RemoveContainer" containerID="f1c50364f80a6c172e9ab40c700aecaed5832969281e3fafcc082a07d4f2ff68"
	Oct 25 11:00:03 no-preload-093313 kubelet[772]: E1025 11:00:03.140510     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k2vjq_kubernetes-dashboard(386db8b1-28c7-49b0-b999-71145f94a1f7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k2vjq" podUID="386db8b1-28c7-49b0-b999-71145f94a1f7"
	Oct 25 11:00:07 no-preload-093313 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 11:00:07 no-preload-093313 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 11:00:07 no-preload-093313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
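	The systemd stop at 11:00:07 is not a kubelet crash: pausing a profile stops the kubelet unit, so this is the Pause test itself acting on the node. A hedged way to confirm the unit state right after a pause:
	
	out/minikube-linux-arm64 -p no-preload-093313 ssh -- sudo systemctl status kubelet --no-pager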
	
	
	==> kubernetes-dashboard [9f492d4ffcca1be504956535aafe86832a25e88e3dd3bb6655bcd469185729d8] <==
	2025/10/25 10:59:30 Using namespace: kubernetes-dashboard
	2025/10/25 10:59:30 Using in-cluster config to connect to apiserver
	2025/10/25 10:59:30 Using secret token for csrf signing
	2025/10/25 10:59:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:59:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:59:30 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:59:30 Generating JWE encryption key
	2025/10/25 10:59:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:59:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:59:31 Initializing JWE encryption key from synchronized object
	2025/10/25 10:59:31 Creating in-cluster Sidecar client
	2025/10/25 10:59:31 Serving insecurely on HTTP port: 9090
	2025/10/25 10:59:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 11:00:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:59:30 Starting overwatch
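	The two "Metric client health check failed" retries line up with dashboard-metrics-scraper crash-looping in the kubelet log above; the dashboard itself is healthy and serving on port 9090. A hedged look at the failing scraper:
	
	kubectl --context no-preload-093313 -n kubernetes-dashboard get pods
	# log of the last crashed attempt
	kubectl --context no-preload-093313 -n kubernetes-dashboard logs deploy/dashboard-metrics-scraper --previous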
	
	
	==> storage-provisioner [35125532fae1d125369f6e7bbbd7c735a67cc2fa39af4d0a1b7697175a3ea7bf] <==
	I1025 10:59:50.787885       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:59:50.803904       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:59:50.804033       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:59:50.808403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:59:54.264607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:59:58.525355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:02.137484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:05.197255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:08.220465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:08.232821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 11:00:08.233953       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 11:00:08.245520       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-093313_cbf79b3a-9c46-4871-bea1-08477c52ed67!
	I1025 11:00:08.245467       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77448026-dc1c-4f90-a3be-98f6a3fbe47d", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-093313_cbf79b3a-9c46-4871-bea1-08477c52ed67 became leader
	W1025 11:00:08.246284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:08.253924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 11:00:08.354367       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-093313_cbf79b3a-9c46-4871-bea1-08477c52ed67!
	W1025 11:00:10.262601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:10.278625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:12.282891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 11:00:12.291108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
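	The repeated v1 Endpoints deprecation warnings come from the provisioner's leader election, which still uses an Endpoints-based lock (kube-system/k8s.io-minikube-hostpath) rather than a Lease, so every acquire/renew call logs the warning on 1.33+ clusters. A hedged look at the lock object itself:
	
	kubectl --context no-preload-093313 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml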
	
	
	==> storage-provisioner [f3cd0358e70fd7964d67639fe6cd07db37eb01990ca5aa0384c07de252a3dd21] <==
	I1025 10:59:19.784973       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:59:49.786719       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
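	This fatal timeout is why the first provisioner attempt shows Exited in the container status above: it gave up after 30 seconds of failing to reach the apiserver VIP, kubelet's RemoveContainer at 10:59:50 cleared it, and the replacement attempt went on to acquire the lease. A hedged way to inspect the dead attempt's exit state while it still exists:
	
	out/minikube-linux-arm64 -p no-preload-093313 ssh -- sudo crictl inspect f3cd0358e70fd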
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-093313 -n no-preload-093313
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-093313 -n no-preload-093313: exit status 2 (391.766947ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-093313 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.54s)
E1025 11:05:55.391194  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:05:55.397578  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:05:55.408965  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:05:55.430399  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:05:55.471822  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:05:55.553199  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:05:55.714734  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:05:56.036438  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:05:56.678705  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:05:57.960027  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:00.521894  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:05.643932  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:15.126746  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:15.886375  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:17.631174  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:20.951259  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:36.367830  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/auto-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:36.949367  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kindnet-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:36.955882  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kindnet-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:36.967367  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kindnet-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:36.988799  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kindnet-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:37.030315  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kindnet-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:37.112637  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kindnet-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:37.274124  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kindnet-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:37.595989  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kindnet-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:38.238420  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kindnet-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:39.519856  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kindnet-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:42.085640  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kindnet-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:06:47.207258  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kindnet-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
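	The cert_rotation errors above are background noise from the test binary's client-certificate reload watcher, which still references profiles (auto-759329, kindnet-759329, and others) whose directories earlier tests already deleted; they are unrelated to the no-preload Pause failure itself. A hedged way to confirm which profile directories remain on the runner:
	
	ls /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/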


Test pass (257/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.36
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 5.79
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.16
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 182.7
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 9.81
48 TestAddons/StoppedEnableDisable 12.48
49 TestCertOptions 45.79
50 TestCertExpiration 259.87
52 TestForceSystemdFlag 36.6
53 TestForceSystemdEnv 47.69
58 TestErrorSpam/setup 33.81
59 TestErrorSpam/start 0.82
60 TestErrorSpam/status 1.15
61 TestErrorSpam/pause 5.88
62 TestErrorSpam/unpause 5.13
63 TestErrorSpam/stop 1.53
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 83.13
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 26.24
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.43
75 TestFunctional/serial/CacheCmd/cache/add_local 1.1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
83 TestFunctional/serial/ExtraConfig 32.95
84 TestFunctional/serial/ComponentHealth 0.12
85 TestFunctional/serial/LogsCmd 1.48
86 TestFunctional/serial/LogsFileCmd 1.47
87 TestFunctional/serial/InvalidService 4
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 14.32
91 TestFunctional/parallel/DryRun 0.58
92 TestFunctional/parallel/InternationalLanguage 0.3
93 TestFunctional/parallel/StatusCmd 1.22
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 24.97
101 TestFunctional/parallel/SSHCmd 0.73
102 TestFunctional/parallel/CpCmd 2.4
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 2.06
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.78
113 TestFunctional/parallel/License 0.32
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.48
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
129 TestFunctional/parallel/MountCmd/any-port 8.01
130 TestFunctional/parallel/MountCmd/specific-port 2
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.42
132 TestFunctional/parallel/ServiceCmd/List 0.66
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.65
137 TestFunctional/parallel/Version/short 0.06
138 TestFunctional/parallel/Version/components 1.39
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.88
144 TestFunctional/parallel/ImageCommands/Setup 0.67
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 209.81
163 TestMultiControlPlane/serial/DeployApp 8.07
164 TestMultiControlPlane/serial/PingHostFromPods 1.58
165 TestMultiControlPlane/serial/AddWorkerNode 59.89
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 20.53
169 TestMultiControlPlane/serial/StopSecondaryNode 12.92
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.86
171 TestMultiControlPlane/serial/RestartSecondaryNode 26.11
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.14
176 TestMultiControlPlane/serial/StopCluster 24.3
177 TestMultiControlPlane/serial/RestartCluster 90.61
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
179 TestMultiControlPlane/serial/AddSecondaryNode 80.43
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.12
184 TestJSONOutput/start/Command 80.36
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.82
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.26
209 TestKicCustomNetwork/create_custom_network 44.76
210 TestKicCustomNetwork/use_default_bridge_network 36.02
211 TestKicExistingNetwork 33.73
212 TestKicCustomSubnet 38.43
213 TestKicStaticIP 36.71
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 69.96
218 TestMountStart/serial/StartWithMountFirst 10.68
219 TestMountStart/serial/VerifyMountFirst 0.27
220 TestMountStart/serial/StartWithMountSecond 6.86
221 TestMountStart/serial/VerifyMountSecond 0.28
222 TestMountStart/serial/DeleteFirst 1.72
223 TestMountStart/serial/VerifyMountPostDelete 0.28
224 TestMountStart/serial/Stop 1.29
225 TestMountStart/serial/RestartStopped 8.39
226 TestMountStart/serial/VerifyMountPostStop 0.28
229 TestMultiNode/serial/FreshStart2Nodes 141.03
230 TestMultiNode/serial/DeployApp2Nodes 5.34
231 TestMultiNode/serial/PingHostFrom2Pods 0.93
232 TestMultiNode/serial/AddNode 58.27
233 TestMultiNode/serial/MultiNodeLabels 0.09
234 TestMultiNode/serial/ProfileList 0.74
235 TestMultiNode/serial/CopyFile 10.69
236 TestMultiNode/serial/StopNode 2.48
237 TestMultiNode/serial/StartAfterStop 8.31
238 TestMultiNode/serial/RestartKeepsNodes 72.15
239 TestMultiNode/serial/DeleteNode 5.99
240 TestMultiNode/serial/StopMultiNode 24.01
241 TestMultiNode/serial/RestartMultiNode 49.33
242 TestMultiNode/serial/ValidateNameConflict 38.6
247 TestPreload 127.12
249 TestScheduledStopUnix 112.08
252 TestInsufficientStorage 14.51
253 TestRunningBinaryUpgrade 50.69
255 TestKubernetesUpgrade 358.97
256 TestMissingContainerUpgrade 113.23
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/StartWithK8s 47.46
260 TestNoKubernetes/serial/StartWithStopK8s 28.97
261 TestNoKubernetes/serial/Start 9.61
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
263 TestNoKubernetes/serial/ProfileList 1.13
264 TestNoKubernetes/serial/Stop 1.37
265 TestNoKubernetes/serial/StartNoArgs 7.63
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
267 TestStoppedBinaryUpgrade/Setup 0.67
268 TestStoppedBinaryUpgrade/Upgrade 73.65
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.22
278 TestPause/serial/Start 83.67
279 TestPause/serial/SecondStartNoReconfiguration 30.79
288 TestNetworkPlugins/group/false 5.9
293 TestStartStop/group/old-k8s-version/serial/FirstStart 65.73
294 TestStartStop/group/old-k8s-version/serial/DeployApp 9.38
296 TestStartStop/group/old-k8s-version/serial/Stop 12
297 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
298 TestStartStop/group/old-k8s-version/serial/SecondStart 52.9
299 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
300 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
301 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.66
306 TestStartStop/group/embed-certs/serial/FirstStart 77.04
307 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.03
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.05
312 TestStartStop/group/embed-certs/serial/DeployApp 9.44
314 TestStartStop/group/embed-certs/serial/Stop 12.12
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
316 TestStartStop/group/embed-certs/serial/SecondStart 55.34
317 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.14
319 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
322 TestStartStop/group/no-preload/serial/FirstStart 66.3
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
328 TestStartStop/group/newest-cni/serial/FirstStart 42.15
329 TestStartStop/group/no-preload/serial/DeployApp 8.48
331 TestStartStop/group/no-preload/serial/Stop 12.3
332 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/Stop 1.36
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
336 TestStartStop/group/newest-cni/serial/SecondStart 20.85
337 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
338 TestStartStop/group/no-preload/serial/SecondStart 55.98
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
343 TestNetworkPlugins/group/auto/Start 86.1
344 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
345 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
346 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
348 TestNetworkPlugins/group/kindnet/Start 79.64
349 TestNetworkPlugins/group/auto/KubeletFlags 0.38
350 TestNetworkPlugins/group/auto/NetCatPod 10.34
351 TestNetworkPlugins/group/auto/DNS 0.18
352 TestNetworkPlugins/group/auto/Localhost 0.15
353 TestNetworkPlugins/group/auto/HairPin 0.15
354 TestNetworkPlugins/group/calico/Start 82.93
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
357 TestNetworkPlugins/group/kindnet/NetCatPod 12.41
358 TestNetworkPlugins/group/kindnet/DNS 0.2
359 TestNetworkPlugins/group/kindnet/Localhost 0.15
360 TestNetworkPlugins/group/kindnet/HairPin 0.17
361 TestNetworkPlugins/group/custom-flannel/Start 65.89
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.42
364 TestNetworkPlugins/group/calico/NetCatPod 11.3
365 TestNetworkPlugins/group/calico/DNS 0.22
366 TestNetworkPlugins/group/calico/Localhost 0.18
367 TestNetworkPlugins/group/calico/HairPin 0.2
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.57
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.37
370 TestNetworkPlugins/group/enable-default-cni/Start 87.39
371 TestNetworkPlugins/group/custom-flannel/DNS 0.16
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
374 TestNetworkPlugins/group/flannel/Start 62.43
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
377 TestNetworkPlugins/group/flannel/ControllerPod 6
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
382 TestNetworkPlugins/group/flannel/NetCatPod 11.42
383 TestNetworkPlugins/group/flannel/DNS 0.19
384 TestNetworkPlugins/group/flannel/Localhost 0.19
385 TestNetworkPlugins/group/flannel/HairPin 0.2
386 TestNetworkPlugins/group/bridge/Start 81.02
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
388 TestNetworkPlugins/group/bridge/NetCatPod 10.26
389 TestNetworkPlugins/group/bridge/DNS 0.15
390 TestNetworkPlugins/group/bridge/Localhost 0.13
391 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.28.0/json-events (5.36s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-770401 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-770401 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.361922415s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.36s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1025 09:46:09.575069  261256 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1025 09:46:09.575158  261256 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
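
The preload-exists check above amounts to a stat of the cached tarball: preload.go logs the expected path and the test passes once the file is present. A rough equivalent, with the path format taken from the log (a sketch only, not minikube's preload.go):

package main

import (
	"fmt"
	"os"
)

// preloadExists mirrors the check logged above: the preloaded-images
// tarball must already sit in the local cache directory.
func preloadExists(k8sVersion string) bool {
	p := fmt.Sprintf(
		"/home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-%s-cri-o-overlay-arm64.tar.lz4",
		k8sVersion)
	_, err := os.Stat(p)
	return err == nil
}

func main() {
	fmt.Println("preload exists:", preloadExists("v1.28.0"))
}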

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-770401
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-770401: exit status 85 (84.620184ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-770401 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-770401 │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:46:04
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:46:04.259535  261261 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:46:04.259738  261261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:04.259764  261261 out.go:374] Setting ErrFile to fd 2...
	I1025 09:46:04.259783  261261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:04.260067  261261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	W1025 09:46:04.260244  261261 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21767-259409/.minikube/config/config.json: open /home/jenkins/minikube-integration/21767-259409/.minikube/config/config.json: no such file or directory
	I1025 09:46:04.260711  261261 out.go:368] Setting JSON to true
	I1025 09:46:04.261599  261261 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5316,"bootTime":1761380249,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:46:04.261694  261261 start.go:141] virtualization:  
	I1025 09:46:04.265923  261261 out.go:99] [download-only-770401] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1025 09:46:04.266118  261261 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 09:46:04.266185  261261 notify.go:220] Checking for updates...
	I1025 09:46:04.269213  261261 out.go:171] MINIKUBE_LOCATION=21767
	I1025 09:46:04.272248  261261 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:46:04.275178  261261 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 09:46:04.278118  261261 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 09:46:04.280999  261261 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1025 09:46:04.286465  261261 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 09:46:04.286721  261261 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:46:04.321498  261261 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:46:04.321635  261261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:04.376079  261261 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-25 09:46:04.367158258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:46:04.376189  261261 docker.go:318] overlay module found
	I1025 09:46:04.379274  261261 out.go:99] Using the docker driver based on user configuration
	I1025 09:46:04.379314  261261 start.go:305] selected driver: docker
	I1025 09:46:04.379326  261261 start.go:925] validating driver "docker" against <nil>
	I1025 09:46:04.379424  261261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:04.427937  261261 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-25 09:46:04.418168272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:46:04.428105  261261 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:46:04.428398  261261 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1025 09:46:04.428566  261261 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 09:46:04.431657  261261 out.go:171] Using Docker driver with root privileges
	I1025 09:46:04.434620  261261 cni.go:84] Creating CNI manager for ""
	I1025 09:46:04.434692  261261 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:46:04.434705  261261 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:46:04.434791  261261 start.go:349] cluster config:
	{Name:download-only-770401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-770401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:46:04.437922  261261 out.go:99] Starting "download-only-770401" primary control-plane node in "download-only-770401" cluster
	I1025 09:46:04.437962  261261 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:46:04.440962  261261 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:46:04.441019  261261 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:46:04.441109  261261 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:46:04.457553  261261 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:46:04.457779  261261 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 09:46:04.457891  261261 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:46:04.504090  261261 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1025 09:46:04.504117  261261 cache.go:58] Caching tarball of preloaded images
	I1025 09:46:04.504301  261261 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:46:04.507533  261261 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1025 09:46:04.507566  261261 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1025 09:46:04.597735  261261 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1025 09:46:04.597900  261261 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1025 09:46:08.220618  261261 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 09:46:08.221072  261261 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/download-only-770401/config.json ...
	I1025 09:46:08.221129  261261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/download-only-770401/config.json: {Name:mk0988f6bfa30a173825711dfaa51273f1c29a9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:46:08.221356  261261 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:46:08.221603  261261 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-770401 host does not exist
	  To start a cluster, run: "minikube start -p download-only-770401"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
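
Worth noting from the log above: the preload is fetched with a "?checksum=md5:..." query, i.e. the tarball is verified against the MD5 that the GCS API returned before it is accepted into the cache. A hand-rolled version of that verification, assuming the tarball is in the current directory and using the checksum from the log (a sketch, not minikube's download.go):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	// Checksum reported by the GCS API in the log above.
	want := "e092595ade89dbfc477bd4cd6b9c633b"

	f, err := os.Open("preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Stream the file through MD5 rather than reading it all into memory;
	// the preload tarballs run to hundreds of megabytes.
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		fmt.Printf("checksum mismatch: got %s, want %s\n", got, want)
	} else {
		fmt.Println("preload tarball verified")
	}
}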

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-770401
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (5.79s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-865577 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-865577 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.790382259s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.79s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1025 09:46:15.820601  261256 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1025 09:46:15.820644  261256 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.16s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-865577
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-865577: exit status 85 (162.235492ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-770401 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-770401 │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ delete  │ -p download-only-770401                                                                                                                                                   │ download-only-770401 │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │ 25 Oct 25 09:46 UTC │
	│ start   │ -o=json --download-only -p download-only-865577 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-865577 │ jenkins │ v1.37.0 │ 25 Oct 25 09:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:46:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:46:10.083369  261457 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:46:10.083563  261457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:10.083573  261457 out.go:374] Setting ErrFile to fd 2...
	I1025 09:46:10.083579  261457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:46:10.083845  261457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 09:46:10.084299  261457 out.go:368] Setting JSON to true
	I1025 09:46:10.085231  261457 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5321,"bootTime":1761380249,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 09:46:10.085310  261457 start.go:141] virtualization:  
	I1025 09:46:10.088916  261457 out.go:99] [download-only-865577] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:46:10.089214  261457 notify.go:220] Checking for updates...
	I1025 09:46:10.092184  261457 out.go:171] MINIKUBE_LOCATION=21767
	I1025 09:46:10.095438  261457 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:46:10.098530  261457 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 09:46:10.101467  261457 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 09:46:10.104554  261457 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1025 09:46:10.110250  261457 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 09:46:10.110611  261457 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:46:10.141154  261457 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:46:10.141283  261457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:10.202797  261457 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-25 09:46:10.192730394 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:46:10.202909  261457 docker.go:318] overlay module found
	I1025 09:46:10.205936  261457 out.go:99] Using the docker driver based on user configuration
	I1025 09:46:10.205993  261457 start.go:305] selected driver: docker
	I1025 09:46:10.206002  261457 start.go:925] validating driver "docker" against <nil>
	I1025 09:46:10.206113  261457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:46:10.266746  261457 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-25 09:46:10.257441025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:46:10.266904  261457 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:46:10.267179  261457 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1025 09:46:10.267329  261457 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 09:46:10.270425  261457 out.go:171] Using Docker driver with root privileges
	I1025 09:46:10.273288  261457 cni.go:84] Creating CNI manager for ""
	I1025 09:46:10.273357  261457 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:46:10.273371  261457 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:46:10.273447  261457 start.go:349] cluster config:
	{Name:download-only-865577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-865577 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:46:10.276376  261457 out.go:99] Starting "download-only-865577" primary control-plane node in "download-only-865577" cluster
	I1025 09:46:10.276405  261457 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:46:10.279218  261457 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:46:10.279248  261457 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:10.279414  261457 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:46:10.295514  261457 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:46:10.295634  261457 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 09:46:10.295653  261457 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 09:46:10.295658  261457 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 09:46:10.295665  261457 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 09:46:10.337917  261457 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:46:10.337941  261457 cache.go:58] Caching tarball of preloaded images
	I1025 09:46:10.338134  261457 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:46:10.341361  261457 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1025 09:46:10.341392  261457 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1025 09:46:10.419419  261457 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1025 09:46:10.419475  261457 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-865577 host does not exist
	  To start a cluster, run: "minikube start -p download-only-865577"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.16s)

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-865577
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I1025 09:46:17.075902  261256 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-439045 --alsologtostderr --binary-mirror http://127.0.0.1:39931 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-439045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-439045
--- PASS: TestBinaryMirror (0.60s)
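
The "checksum=file:<url>.sha256" form logged above means the expected digest is not inline: it is read from the companion .sha256 file published next to the binary. A sketch of that resolution and check, with the URL copied from the log (this illustrates the scheme, not minikube's implementation):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchSHA256 downloads url and returns the hex SHA-256 of its body.
func fetchSHA256(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	h := sha256.New()
	if _, err := io.Copy(h, resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"

	// "checksum=file:<url>" means: fetch the checksum file and take its
	// first token as the expected digest (the file may or may not carry a
	// trailing filename, sha256sum-style).
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(raw))[0]

	got, err := fetchSHA256(base)
	if err != nil {
		panic(err)
	}
	fmt.Println("kubectl digest matches:", got == want)
}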

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-184548
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-184548: exit status 85 (71.669546ms)

-- stdout --
	* Profile "addons-184548" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-184548"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-184548
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-184548: exit status 85 (74.121466ms)

-- stdout --
	* Profile "addons-184548" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-184548"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (182.7s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-184548 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-184548 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m2.7032777s)
--- PASS: TestAddons/Setup (182.70s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-184548 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-184548 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/serial/GCPAuth/FakeCredentials (9.81s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-184548 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-184548 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [51b1b07b-43e0-41be-99ba-823ac3bf80c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [51b1b07b-43e0-41be-99ba-823ac3bf80c9] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003830446s
addons_test.go:694: (dbg) Run:  kubectl --context addons-184548 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-184548 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-184548 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-184548 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.81s)

TestAddons/StoppedEnableDisable (12.48s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-184548
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-184548: (12.188363246s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-184548
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-184548
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-184548
--- PASS: TestAddons/StoppedEnableDisable (12.48s)

TestCertOptions (45.79s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-771620 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1025 10:51:20.958109  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-771620 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (42.90356088s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-771620 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-771620 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-771620 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-771620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-771620
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-771620: (2.120904908s)
--- PASS: TestCertOptions (45.79s)
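
What the openssl x509 call above asserts: every extra --apiserver-ips and --apiserver-names value must appear in the API server certificate's SubjectAltName extension. A sketch of the same check in Go, assuming apiserver.crt has already been copied off the node (the test reads /var/lib/minikube/certs/apiserver.crt over minikube ssh instead):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path is illustrative; the cert lives on the node, not the host.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost and www.google.com
	fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1 and 192.168.15.15
}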

TestCertExpiration (259.87s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-736062 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-736062 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (47.64816191s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-736062 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-736062 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (29.50961115s)
helpers_test.go:175: Cleaning up "cert-expiration-736062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-736062
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-736062: (2.706733853s)
--- PASS: TestCertExpiration (259.87s)
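
The two --cert-expiration values are plain Go durations: 3m makes the cluster certificates expire almost immediately, and the second start with 8760h (365 days) forces them to be regenerated. A quick check of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	short, _ := time.ParseDuration("3m")
	long, _ := time.ParseDuration("8760h")
	fmt.Println(short)             // 3m0s
	fmt.Println(long.Hours() / 24) // 365 days
}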

TestForceSystemdFlag (36.6s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-759136 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1025 10:49:21.255596  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-759136 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.712495642s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-759136 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-759136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-759136
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-759136: (2.589787907s)
--- PASS: TestForceSystemdFlag (36.60s)
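
The ssh step above dumps /etc/crio/crio.conf.d/02-crio.conf to confirm --force-systemd took effect. A sketch of the assertion this enables, assuming the drop-in carries CRI-O's cgroup_manager key (the file's contents are not shown in this log, so the expected line is an assumption):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Illustrative: run against a copy of the drop-in fetched from the node.
	data, err := os.ReadFile("02-crio.conf")
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not configured")
	}
}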

TestForceSystemdEnv (47.69s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-623432 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-623432 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.738334723s)
helpers_test.go:175: Cleaning up "force-systemd-env-623432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-623432
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-623432: (2.948560342s)
--- PASS: TestForceSystemdEnv (47.69s)

TestErrorSpam/setup (33.81s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-045579 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-045579 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-045579 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-045579 --driver=docker  --container-runtime=crio: (33.81275698s)
--- PASS: TestErrorSpam/setup (33.81s)

TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.15s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 status
--- PASS: TestErrorSpam/status (1.15s)

TestErrorSpam/pause (5.88s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 pause: exit status 80 (1.873624113s)
-- stdout --
	* Pausing node nospam-045579 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 pause: exit status 80 (2.244959318s)
-- stdout --
	* Pausing node nospam-045579 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 pause: exit status 80 (1.756648258s)
-- stdout --
	* Pausing node nospam-045579 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.88s)
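
All three pause failures above are the same fault: minikube's pause path shells out to sudo runc list -f json, and runc's default state root /run/runc does not exist on this CRI-O node, so the listing aborts before anything is paused (the test still passes because it only asserts on log spam, not on pause succeeding). A diagnostic sketch; the candidate root paths are assumptions, not minikube code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// runc accepts a global --root flag for its state directory; probe a
	// few plausible locations to find where container state actually lives.
	for _, root := range []string{"/run/runc", "/run/crio/runc"} {
		out, err := exec.Command("sudo", "runc", "--root", root,
			"list", "-f", "json").CombinedOutput()
		fmt.Printf("root %s: err=%v out=%s\n", root, err, out)
	}
}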

TestErrorSpam/unpause (5.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 unpause: exit status 80 (1.73733447s)
-- stdout --
	* Unpausing node nospam-045579 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 unpause: exit status 80 (1.636954728s)
-- stdout --
	* Unpausing node nospam-045579 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 unpause: exit status 80 (1.748423374s)
-- stdout --
	* Unpausing node nospam-045579 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:53:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.13s)

TestErrorSpam/stop (1.53s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 stop: (1.317501469s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-045579 --log_dir /tmp/nospam-045579 stop
--- PASS: TestErrorSpam/stop (1.53s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21767-259409/.minikube/files/etc/test/nested/copy/261256/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (83.13s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-558907 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1025 09:54:21.259516  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:21.266088  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:21.277822  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:21.302113  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:21.343452  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:21.424850  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:21.586327  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:21.908100  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:22.550134  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:23.831478  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:26.394130  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:31.515543  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:54:41.758346  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:55:02.240305  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-558907 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m23.132792973s)
--- PASS: TestFunctional/serial/StartWithProxy (83.13s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.24s)

=== RUN   TestFunctional/serial/SoftStart
I1025 09:55:05.444658  261256 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-558907 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-558907 --alsologtostderr -v=8: (26.23667508s)
functional_test.go:678: soft start took 26.241291703s for "functional-558907" cluster.
I1025 09:55:31.681686  261256 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (26.24s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-558907 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-558907 cache add registry.k8s.io/pause:3.1: (1.165074771s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-558907 cache add registry.k8s.io/pause:3.3: (1.14787883s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-558907 cache add registry.k8s.io/pause:latest: (1.121627055s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-558907 /tmp/TestFunctionalserialCacheCmdcacheadd_local3391211528/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 cache add minikube-local-cache-test:functional-558907
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 cache delete minikube-local-cache-test:functional-558907
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-558907
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-558907 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.914064ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)
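
The sequence above is a round trip: delete the image from the node, prove crictl inspecti now fails, run cache reload, and prove the inspect succeeds again. A compact sketch of that flow using the binary path and profile from this log (the run helper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and reports only whether it succeeded.
func run(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func main() {
	const mk, profile = "out/minikube-linux-arm64", "functional-558907"
	const img = "registry.k8s.io/pause:latest"
	_ = run(mk, "-p", profile, "ssh", "sudo crictl rmi "+img)
	if run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img) == nil {
		fmt.Println("image unexpectedly still present")
	}
	_ = run(mk, "-p", profile, "cache", "reload") // re-pushes cached images to the node
	if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("cache reload did not restore the image:", err)
	}
}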

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 kubectl -- --context functional-558907 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-558907 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (32.95s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-558907 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 09:55:43.201797  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-558907 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.94648724s)
functional_test.go:776: restart took 32.946582051s for "functional-558907" cluster.
I1025 09:56:11.984077  261256 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (32.95s)

TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-558907 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)
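
The health check above reduces to: every control-plane pod (label tier=control-plane in kube-system) must be in phase Running with condition Ready. The same query from outside the test, expressed with kubectl's jsonpath (illustrative, not the test's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-558907",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system",
		"-o", `jsonpath={range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}`).Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // expect etcd, kube-apiserver, etc., all Running
}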

TestFunctional/serial/LogsCmd (1.48s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-558907 logs: (1.47504037s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

TestFunctional/serial/LogsFileCmd (1.47s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 logs --file /tmp/TestFunctionalserialLogsFileCmd1884108005/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-558907 logs --file /tmp/TestFunctionalserialLogsFileCmd1884108005/001/logs.txt: (1.469933247s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

TestFunctional/serial/InvalidService (4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-558907 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-558907
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-558907: exit status 115 (390.678633ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30315 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-558907 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.00s)
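
The exit status 115 above is the expected outcome: the NodePort URL is printed, but minikube service then refuses because no running pod backs invalid-svc. One way to observe the same condition is to check that the Service has no ready endpoints (the query is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-558907",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		panic(err)
	}
	if len(out) == 0 {
		fmt.Println("no ready endpoints: service is unreachable")
	}
}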

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-558907 config get cpus: exit status 14 (81.961597ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-558907 config get cpus: exit status 14 (66.907617ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
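
The contract exercised above: config get on a key that is absent (never set, or just unset) exits with status 14 and prints the error on stderr, while set/get/unset on a present key succeed. A sketch of asserting that exit code from Go, using the binary path and profile from this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-arm64", "-p", "functional-558907",
		"config", "get", "cpus").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("key not set, as expected (exit status 14)")
	}
}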

TestFunctional/parallel/DashboardCmd (14.32s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-558907 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-558907 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 287593: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.32s)

TestFunctional/parallel/DryRun (0.58s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-558907 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-558907 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (243.656849ms)
-- stdout --
	* [functional-558907] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1025 10:06:47.733122  287052 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:06:47.733505  287052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:06:47.733515  287052 out.go:374] Setting ErrFile to fd 2...
	I1025 10:06:47.733521  287052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:06:47.733817  287052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:06:47.734279  287052 out.go:368] Setting JSON to false
	I1025 10:06:47.735312  287052 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6559,"bootTime":1761380249,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:06:47.735397  287052 start.go:141] virtualization:  
	I1025 10:06:47.739367  287052 out.go:179] * [functional-558907] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:06:47.742262  287052 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:06:47.742426  287052 notify.go:220] Checking for updates...
	I1025 10:06:47.748544  287052 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:06:47.751501  287052 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:06:47.754994  287052 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:06:47.757955  287052 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:06:47.760851  287052 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:06:47.764450  287052 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:06:47.765017  287052 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:06:47.815786  287052 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:06:47.815936  287052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:06:47.882879  287052 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:06:47.870882919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:06:47.882992  287052 docker.go:318] overlay module found
	I1025 10:06:47.886419  287052 out.go:179] * Using the docker driver based on existing profile
	I1025 10:06:47.889435  287052 start.go:305] selected driver: docker
	I1025 10:06:47.889453  287052 start.go:925] validating driver "docker" against &{Name:functional-558907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:06:47.889574  287052 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:06:47.893194  287052 out.go:203] 
	W1025 10:06:47.896346  287052 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 10:06:47.899297  287052 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-558907 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.58s)
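
The first dry run fails by design: 250MB is below minikube's usable minimum, so validation rejects it with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) before any node is touched, while the second dry run with the profile's defaults succeeds. An illustrative restatement of that guard, not minikube's source:

package main

import "fmt"

const minUsableMB = 1800 // usable minimum from the error message above

// validateMemory mirrors the shape of the check, purely for illustration.
func validateMemory(reqMB int) error {
	if reqMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB", reqMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, as in the dry run above
	fmt.Println(validateMemory(4096)) // accepted
}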

TestFunctional/parallel/InternationalLanguage (0.3s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-558907 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-558907 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (294.958244ms)
-- stdout --
	* [functional-558907] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1025 10:06:47.447919  286957 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:06:47.450167  286957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:06:47.450236  286957 out.go:374] Setting ErrFile to fd 2...
	I1025 10:06:47.450258  286957 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:06:47.452303  286957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:06:47.452799  286957 out.go:368] Setting JSON to false
	I1025 10:06:47.453715  286957 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6559,"bootTime":1761380249,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:06:47.453815  286957 start.go:141] virtualization:  
	I1025 10:06:47.457586  286957 out.go:179] * [functional-558907] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1025 10:06:47.460598  286957 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:06:47.460778  286957 notify.go:220] Checking for updates...
	I1025 10:06:47.466985  286957 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:06:47.469923  286957 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:06:47.472747  286957 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:06:47.475654  286957 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:06:47.478561  286957 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:06:47.481835  286957 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:06:47.482543  286957 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:06:47.548531  286957 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:06:47.548661  286957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:06:47.640460  286957 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:06:47.630679248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:06:47.640573  286957 docker.go:318] overlay module found
	I1025 10:06:47.644343  286957 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1025 10:06:47.646285  286957 start.go:305] selected driver: docker
	I1025 10:06:47.646303  286957 start.go:925] validating driver "docker" against &{Name:functional-558907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-558907 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:06:47.646406  286957 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:06:47.650102  286957 out.go:203] 
	W1025 10:06:47.652931  286957 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 10:06:47.655773  286957 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.30s)

TestFunctional/parallel/StatusCmd (1.22s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)
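
The format template exercised above can be replayed by hand. A minimal sketch, assuming the functional-558907 profile is still running; the field names Host, Kubelet, APIServer and Kubeconfig come straight from the invocation in the log (the test's own template misspells one key as "kublet", which is harmless since it is only a label):

    # plain human-readable status
    out/minikube-linux-arm64 -p functional-558907 status

    # Go template over the status struct, one line of key:value pairs
    out/minikube-linux-arm64 -p functional-558907 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'

    # machine-readable JSON, as in the third run
    out/minikube-linux-arm64 -p functional-558907 status -o json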

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (24.97s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [fb41ca7d-0d8d-443a-99f3-5ca45633e048] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003368874s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-558907 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-558907 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-558907 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-558907 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3e373411-ee68-4b84-b0bf-30d3a2c82786] Pending
helpers_test.go:352: "sp-pod" [3e373411-ee68-4b84-b0bf-30d3a2c82786] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [3e373411-ee68-4b84-b0bf-30d3a2c82786] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004088472s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-558907 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-558907 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-558907 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [39992b5d-1857-4c79-9095-a836a6604123] Pending
helpers_test.go:352: "sp-pod" [39992b5d-1857-4c79-9095-a836a6604123] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003606453s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-558907 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.97s)
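
The persistence check above reduces to writing through one pod and reading the file back through its replacement. A condensed sketch, reusing the pvc.yaml/pod.yaml manifests from the repo's testdata/storage-provisioner directory and the sp-pod name they declare:

    kubectl --context functional-558907 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-558907 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-558907 exec sp-pod -- touch /tmp/mount/foo   # write through the claim
    kubectl --context functional-558907 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-558907 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-558907 exec sp-pod -- ls /tmp/mount          # foo must survive the pod swap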

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2.4s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh -n functional-558907 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 cp functional-558907:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd596004806/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh -n functional-558907 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh -n functional-558907 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.40s)
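
All three cp directions from the run, condensed; the last one shows that missing parent directories on the node side are created on the fly (the host-side destination ./cp-test.txt is illustrative, the test uses a temp dir):

    out/minikube-linux-arm64 -p functional-558907 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
    out/minikube-linux-arm64 -p functional-558907 cp functional-558907:/home/docker/cp-test.txt ./cp-test.txt   # node -> host
    out/minikube-linux-arm64 -p functional-558907 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt       # parents created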

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/261256/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "sudo cat /etc/test/nested/copy/261256/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)
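
FileSync exercises minikube's host-to-node file sync: files staged under $MINIKUBE_HOME/files/<path> on the host are copied to /<path> inside the node. A sketch under that assumption (261256 is just this run's test PID):

    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/261256"
    echo 'Test file for checking file sync process' > "$MINIKUBE_HOME/files/etc/test/nested/copy/261256/hosts"
    # after the node is (re)started, the file appears at the mirrored path
    out/minikube-linux-arm64 -p functional-558907 ssh "sudo cat /etc/test/nested/copy/261256/hosts"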

TestFunctional/parallel/CertSync (2.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/261256.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "sudo cat /etc/ssl/certs/261256.pem"
2025/10/25 10:07:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/261256.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "sudo cat /usr/share/ca-certificates/261256.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2612562.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "sudo cat /etc/ssl/certs/2612562.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2612562.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "sudo cat /usr/share/ca-certificates/2612562.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.06s)
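
CertSync applies the same sync idea to CA certificates: a PEM staged under $MINIKUBE_HOME/certs should end up in the node's trust store both under its own name and under its OpenSSL subject-hash alias (51391683.0 and 3ec20f2e.0 in this run). The probes, verbatim:

    out/minikube-linux-arm64 -p functional-558907 ssh "sudo cat /etc/ssl/certs/261256.pem"
    out/minikube-linux-arm64 -p functional-558907 ssh "sudo cat /usr/share/ca-certificates/261256.pem"
    out/minikube-linux-arm64 -p functional-558907 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash alias OpenSSL resolves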

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-558907 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-558907 ssh "sudo systemctl is-active docker": exit status 1 (386.203947ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-558907 ssh "sudo systemctl is-active containerd": exit status 1 (388.977842ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
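
The assertion leans on systemctl's exit codes: `systemctl is-active` exits 0 only for an active unit, and the remote status-3 "inactive" result propagates through ssh as the exit status 1 seen above. With crio as the configured runtime, a sketch of the same probe:

    out/minikube-linux-arm64 -p functional-558907 ssh "sudo systemctl is-active crio"        # expect "active", exit 0
    out/minikube-linux-arm64 -p functional-558907 ssh "sudo systemctl is-active docker"      # expect "inactive", non-zero
    out/minikube-linux-arm64 -p functional-558907 ssh "sudo systemctl is-active containerd"  # expect "inactive", non-zero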

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-558907 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-558907 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-558907 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 283425: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-558907 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-558907 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-558907 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [9e5f7b0c-a3ca-4b1a-810a-fcd9fb3545f2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [9e5f7b0c-a3ca-4b1a-810a-fcd9fb3545f2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.00444261s
I1025 09:56:29.427738  261256 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-558907 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.96.7 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
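
The tunnel flow is the standard LoadBalancer workaround on a local cluster: keep `minikube tunnel` running, then read the ingress IP the controller assigns. A sketch, assuming the nginx-svc service from testdata/testsvc.yaml is deployed (the curl step mirrors the test's reachability check against http://10.111.96.7):

    out/minikube-linux-arm64 -p functional-558907 tunnel &    # background route into the service network
    IP=$(kubectl --context functional-558907 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl "http://$IP"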

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-558907 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "362.234289ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.921527ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "395.970005ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.016564ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
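
The timing gap above (~396 ms vs ~54 ms) is the point of --light: it skips probing each cluster's live status and reads only the stored profile configs. Both invocations, verbatim:

    out/minikube-linux-arm64 profile list -o json            # includes live status, slower
    out/minikube-linux-arm64 profile list -o json --light    # config only, no status probe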

TestFunctional/parallel/MountCmd/any-port (8.01s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-558907 /tmp/TestFunctionalparallelMountCmdany-port651378387/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761386794715953833" to /tmp/TestFunctionalparallelMountCmdany-port651378387/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761386794715953833" to /tmp/TestFunctionalparallelMountCmdany-port651378387/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761386794715953833" to /tmp/TestFunctionalparallelMountCmdany-port651378387/001/test-1761386794715953833
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-558907 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (393.469449ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 10:06:35.110498  261256 retry.go:31] will retry after 507.511718ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 10:06 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 10:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 10:06 test-1761386794715953833
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh cat /mount-9p/test-1761386794715953833
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-558907 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [88bb9748-157c-42cb-99b3-7613148bd0d1] Pending
helpers_test.go:352: "busybox-mount" [88bb9748-157c-42cb-99b3-7613148bd0d1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [88bb9748-157c-42cb-99b3-7613148bd0d1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [88bb9748-157c-42cb-99b3-7613148bd0d1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003354047s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-558907 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-558907 /tmp/TestFunctionalparallelMountCmdany-port651378387/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.01s)
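
The 9p round-trip can be reproduced by hand. A minimal sketch with an illustrative host directory /tmp/shared (the specific-port variant below adds --port 46464 to pin the server port):

    out/minikube-linux-arm64 mount -p functional-558907 /tmp/shared:/mount-9p &             # serve host dir over 9p
    out/minikube-linux-arm64 -p functional-558907 ssh "findmnt -T /mount-9p | grep 9p"      # confirm it is a 9p mount
    out/minikube-linux-arm64 -p functional-558907 ssh -- ls -la /mount-9p
    out/minikube-linux-arm64 mount -p functional-558907 --kill=true   # kill every mount process for the profile (used by VerifyCleanup)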

TestFunctional/parallel/MountCmd/specific-port (2s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-558907 /tmp/TestFunctionalparallelMountCmdspecific-port1899527882/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-558907 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (353.436862ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 10:06:43.078895  261256 retry.go:31] will retry after 561.35042ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-558907 /tmp/TestFunctionalparallelMountCmdspecific-port1899527882/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-558907 ssh "sudo umount -f /mount-9p": exit status 1 (289.996794ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-558907 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-558907 /tmp/TestFunctionalparallelMountCmdspecific-port1899527882/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-558907 /tmp/TestFunctionalparallelMountCmdVerifyCleanup561050122/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-558907 /tmp/TestFunctionalparallelMountCmdVerifyCleanup561050122/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-558907 /tmp/TestFunctionalparallelMountCmdVerifyCleanup561050122/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-558907 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-558907 /tmp/TestFunctionalparallelMountCmdVerifyCleanup561050122/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-558907 /tmp/TestFunctionalparallelMountCmdVerifyCleanup561050122/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-558907 /tmp/TestFunctionalparallelMountCmdVerifyCleanup561050122/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)

TestFunctional/parallel/ServiceCmd/List (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.66s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 service list -o json
functional_test.go:1504: Took "645.906064ms" to run "out/minikube-linux-arm64 -p functional-558907 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.39s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-558907 version -o=json --components: (1.38992515s)
--- PASS: TestFunctional/parallel/Version/components (1.39s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-558907 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-558907 image ls --format short --alsologtostderr:
I1025 10:07:04.092427  289435 out.go:360] Setting OutFile to fd 1 ...
I1025 10:07:04.092692  289435 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 10:07:04.092734  289435 out.go:374] Setting ErrFile to fd 2...
I1025 10:07:04.092755  289435 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 10:07:04.093097  289435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
I1025 10:07:04.093849  289435 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 10:07:04.094080  289435 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 10:07:04.095167  289435 cli_runner.go:164] Run: docker container inspect functional-558907 --format={{.State.Status}}
I1025 10:07:04.118861  289435 ssh_runner.go:195] Run: systemctl --version
I1025 10:07:04.118931  289435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
I1025 10:07:04.140093  289435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/functional-558907/id_rsa Username:docker}
I1025 10:07:04.259802  289435 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
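
The four ImageList variants differ only in --format; under crio each one is backed by the same `sudo crictl images --output json` call visible in the stderr trace above. The forms, verbatim:

    out/minikube-linux-arm64 -p functional-558907 image ls --format short   # repo:tag, one per line
    out/minikube-linux-arm64 -p functional-558907 image ls --format table   # boxed table (next test)
    out/minikube-linux-arm64 -p functional-558907 image ls --format json
    out/minikube-linux-arm64 -p functional-558907 image ls --format yaml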

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-558907 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ latest             │ e612b97116b41 │ 176MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-558907 image ls --format table --alsologtostderr:
I1025 10:07:04.924177  289689 out.go:360] Setting OutFile to fd 1 ...
I1025 10:07:04.924406  289689 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 10:07:04.924414  289689 out.go:374] Setting ErrFile to fd 2...
I1025 10:07:04.924419  289689 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 10:07:04.924744  289689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
I1025 10:07:04.925962  289689 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 10:07:04.926115  289689 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 10:07:04.926576  289689 cli_runner.go:164] Run: docker container inspect functional-558907 --format={{.State.Status}}
I1025 10:07:04.947501  289689 ssh_runner.go:195] Run: systemctl --version
I1025 10:07:04.947568  289689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
I1025 10:07:04.965777  289689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/functional-558907/id_rsa Username:docker}
I1025 10:07:05.080468  289689 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-558907 image ls --format json --alsologtostderr:
[{"id":"e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f"],"repoTags":["docker.io/library/nginx:latest"],"size":"176071022"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852
a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"s
ize":"247562353"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metric
s-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoD
igests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"4391
1e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-558907 image ls --format json --alsologtostderr:
I1025 10:07:04.652639  289616 out.go:360] Setting OutFile to fd 1 ...
I1025 10:07:04.652903  289616 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 10:07:04.652911  289616 out.go:374] Setting ErrFile to fd 2...
I1025 10:07:04.652916  289616 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 10:07:04.653212  289616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
I1025 10:07:04.653842  289616 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 10:07:04.653954  289616 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 10:07:04.654486  289616 cli_runner.go:164] Run: docker container inspect functional-558907 --format={{.State.Status}}
I1025 10:07:04.673420  289616 ssh_runner.go:195] Run: systemctl --version
I1025 10:07:04.673528  289616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
I1025 10:07:04.696889  289616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/functional-558907/id_rsa Username:docker}
I1025 10:07:04.801401  289616 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-558907 image ls --format yaml --alsologtostderr:
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f
repoTags:
- docker.io/library/nginx:latest
size: "176071022"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-558907 image ls --format yaml --alsologtostderr:
I1025 10:07:04.360197  289529 out.go:360] Setting OutFile to fd 1 ...
I1025 10:07:04.360365  289529 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 10:07:04.360401  289529 out.go:374] Setting ErrFile to fd 2...
I1025 10:07:04.360422  289529 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 10:07:04.360692  289529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
I1025 10:07:04.361310  289529 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 10:07:04.361479  289529 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 10:07:04.361962  289529 cli_runner.go:164] Run: docker container inspect functional-558907 --format={{.State.Status}}
I1025 10:07:04.381559  289529 ssh_runner.go:195] Run: systemctl --version
I1025 10:07:04.381610  289529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
I1025 10:07:04.418200  289529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/functional-558907/id_rsa Username:docker}
I1025 10:07:04.541069  289529 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
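
The YAML listing above is minikube's rendering of `sudo crictl images --output json`, the last command in the stderr log. A minimal Go sketch of decoding that JSON directly, assuming the CRI field names visible in the output (id, repoTags, repoDigests, and size serialized as a string, e.g. "519884"); this is illustrative, not minikube's actual decoder:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImage mirrors the fields visible in the listing above; assumed, not
// minikube's actual struct.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // crictl serializes size as a string
}

func main() {
	// the same command the stderr log runs over ssh
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list struct {
		Images []criImage `json:"images"`
	}
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Printf("%s  tags=%v  size=%s\n", img.ID, img.RepoTags, img.Size)
	}
}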

TestFunctional/parallel/ImageCommands/ImageBuild (3.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-558907 ssh pgrep buildkitd: exit status 1 (362.280106ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image build -t localhost/my-image:functional-558907 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-558907 image build -t localhost/my-image:functional-558907 testdata/build --alsologtostderr: (3.276276747s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-558907 image build -t localhost/my-image:functional-558907 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c0b0bb7746d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-558907
--> fcdf8b1e143
Successfully tagged localhost/my-image:functional-558907
fcdf8b1e143033c611376b43ec06de0d03e7c481d2230a3c5b15b9a731680e89
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-558907 image build -t localhost/my-image:functional-558907 testdata/build --alsologtostderr:
I1025 10:07:04.828602  289667 out.go:360] Setting OutFile to fd 1 ...
I1025 10:07:04.830897  289667 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 10:07:04.830948  289667 out.go:374] Setting ErrFile to fd 2...
I1025 10:07:04.830988  289667 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 10:07:04.831302  289667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
I1025 10:07:04.832449  289667 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 10:07:04.833847  289667 config.go:182] Loaded profile config "functional-558907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 10:07:04.834413  289667 cli_runner.go:164] Run: docker container inspect functional-558907 --format={{.State.Status}}
I1025 10:07:04.861579  289667 ssh_runner.go:195] Run: systemctl --version
I1025 10:07:04.861652  289667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558907
I1025 10:07:04.883233  289667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/functional-558907/id_rsa Username:docker}
I1025 10:07:04.990480  289667 build_images.go:161] Building image from path: /tmp/build.2898041606.tar
I1025 10:07:04.990552  289667 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 10:07:04.998318  289667 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2898041606.tar
I1025 10:07:05.009956  289667 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2898041606.tar: stat -c "%s %y" /var/lib/minikube/build/build.2898041606.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2898041606.tar': No such file or directory
I1025 10:07:05.010108  289667 ssh_runner.go:362] scp /tmp/build.2898041606.tar --> /var/lib/minikube/build/build.2898041606.tar (3072 bytes)
I1025 10:07:05.037714  289667 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2898041606
I1025 10:07:05.047643  289667 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2898041606 -xf /var/lib/minikube/build/build.2898041606.tar
I1025 10:07:05.057158  289667 crio.go:315] Building image: /var/lib/minikube/build/build.2898041606
I1025 10:07:05.057231  289667 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-558907 /var/lib/minikube/build/build.2898041606 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1025 10:07:08.015727  289667 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-558907 /var/lib/minikube/build/build.2898041606 --cgroup-manager=cgroupfs: (2.958471041s)
I1025 10:07:08.015816  289667 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2898041606
I1025 10:07:08.024536  289667 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2898041606.tar
I1025 10:07:08.032561  289667 build_images.go:217] Built localhost/my-image:functional-558907 from /tmp/build.2898041606.tar
I1025 10:07:08.032602  289667 build_images.go:133] succeeded building to: functional-558907
I1025 10:07:08.032607  289667 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.88s)
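
The build log above shows the crio-runtime build path: the build context is tarred locally, copied to /var/lib/minikube/build on the node, unpacked, and handed to `sudo podman build ... --cgroup-manager=cgroupfs` (crio.go:315). A rough sketch of that sequence, assuming it runs on the node itself; the paths and the runCmd helper are hypothetical stand-ins for minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// runCmd is a hypothetical helper standing in for minikube's ssh_runner.
func runCmd(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	const (
		tarball = "/var/lib/minikube/build/build.example.tar" // illustrative name
		dir     = "/var/lib/minikube/build/build.example"
		tag     = "localhost/my-image:example"
	)
	steps := [][]string{
		{"sudo", "mkdir", "-p", dir},
		{"sudo", "tar", "-C", dir, "-xf", tarball},
		// on the crio runtime the actual build is delegated to podman:
		{"sudo", "podman", "build", "-t", tag, dir, "--cgroup-manager=cgroupfs"},
		{"sudo", "rm", "-rf", dir},
		{"sudo", "rm", "-f", tarball},
	}
	for _, s := range steps {
		if err := runCmd(s[0], s[1:]...); err != nil {
			panic(err)
		}
	}
}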

TestFunctional/parallel/ImageCommands/Setup (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-558907
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.67s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image rm kicbase/echo-server:functional-558907 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-558907 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-558907
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-558907
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-558907
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (209.81s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1025 10:09:21.255576  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m28.903287604s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (209.81s)

TestMultiControlPlane/serial/DeployApp (8.07s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- rollout status deployment/busybox
E1025 10:10:44.327642  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 kubectl -- rollout status deployment/busybox: (5.261779035s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-cmlf6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-gzkw5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-wkwwg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-cmlf6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-gzkw5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-wkwwg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-cmlf6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-gzkw5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-wkwwg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.07s)

TestMultiControlPlane/serial/PingHostFromPods (1.58s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-cmlf6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-cmlf6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-gzkw5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-gzkw5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-wkwwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 kubectl -- exec busybox-7b57f96db7-wkwwg -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.58s)
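
The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above takes the fifth line of nslookup's output and extracts its third space-separated field, which is the resolved host IP that the subsequent `ping -c 1` targets. A small Go equivalent of that extraction, assuming a busybox-style nslookup transcript (the sample text is illustrative; formatting varies between nslookup builds):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// illustrative busybox-style nslookup output
	sample := "Server:    10.96.0.10\n" +
		"Address:   10.96.0.10:53\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1 host.minikube.internal\n"

	lines := strings.Split(sample, "\n")
	// awk 'NR==5' selects the fifth line; cut -d' ' -f3 the third field
	fields := strings.Split(lines[4], " ")
	if len(fields) >= 3 {
		fmt.Println(fields[2]) // 192.168.49.1 — the address then pinged
	}
}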

TestMultiControlPlane/serial/AddWorkerNode (59.89s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 node add --alsologtostderr -v 5
E1025 10:11:20.951661  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:20.958078  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:20.969520  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:20.990973  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:21.032440  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:21.113861  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:21.275328  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:21.596906  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:22.238912  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:23.520310  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:26.081677  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:31.203373  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:41.444881  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 node add --alsologtostderr -v 5: (58.799089462s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5: (1.087820489s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.89s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-480889 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.080017312s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

TestMultiControlPlane/serial/CopyFile (20.53s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 status --output json --alsologtostderr -v 5: (1.085014595s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp testdata/cp-test.txt ha-480889:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3016407791/001/cp-test_ha-480889.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889:/home/docker/cp-test.txt ha-480889-m02:/home/docker/cp-test_ha-480889_ha-480889-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m02 "sudo cat /home/docker/cp-test_ha-480889_ha-480889-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889:/home/docker/cp-test.txt ha-480889-m03:/home/docker/cp-test_ha-480889_ha-480889-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m03 "sudo cat /home/docker/cp-test_ha-480889_ha-480889-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889:/home/docker/cp-test.txt ha-480889-m04:/home/docker/cp-test_ha-480889_ha-480889-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m04 "sudo cat /home/docker/cp-test_ha-480889_ha-480889-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp testdata/cp-test.txt ha-480889-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3016407791/001/cp-test_ha-480889-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889-m02:/home/docker/cp-test.txt ha-480889:/home/docker/cp-test_ha-480889-m02_ha-480889.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889 "sudo cat /home/docker/cp-test_ha-480889-m02_ha-480889.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889-m02:/home/docker/cp-test.txt ha-480889-m03:/home/docker/cp-test_ha-480889-m02_ha-480889-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m03 "sudo cat /home/docker/cp-test_ha-480889-m02_ha-480889-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889-m02:/home/docker/cp-test.txt ha-480889-m04:/home/docker/cp-test_ha-480889-m02_ha-480889-m04.txt
E1025 10:12:01.927252  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m04 "sudo cat /home/docker/cp-test_ha-480889-m02_ha-480889-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp testdata/cp-test.txt ha-480889-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3016407791/001/cp-test_ha-480889-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889-m03:/home/docker/cp-test.txt ha-480889:/home/docker/cp-test_ha-480889-m03_ha-480889.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889 "sudo cat /home/docker/cp-test_ha-480889-m03_ha-480889.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889-m03:/home/docker/cp-test.txt ha-480889-m02:/home/docker/cp-test_ha-480889-m03_ha-480889-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m02 "sudo cat /home/docker/cp-test_ha-480889-m03_ha-480889-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889-m03:/home/docker/cp-test.txt ha-480889-m04:/home/docker/cp-test_ha-480889-m03_ha-480889-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m04 "sudo cat /home/docker/cp-test_ha-480889-m03_ha-480889-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp testdata/cp-test.txt ha-480889-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3016407791/001/cp-test_ha-480889-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt ha-480889:/home/docker/cp-test_ha-480889-m04_ha-480889.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889 "sudo cat /home/docker/cp-test_ha-480889-m04_ha-480889.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt ha-480889-m02:/home/docker/cp-test_ha-480889-m04_ha-480889-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m02 "sudo cat /home/docker/cp-test_ha-480889-m04_ha-480889-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 cp ha-480889-m04:/home/docker/cp-test.txt ha-480889-m03:/home/docker/cp-test_ha-480889-m04_ha-480889-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 ssh -n ha-480889-m03 "sudo cat /home/docker/cp-test_ha-480889-m04_ha-480889-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.53s)
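
Every cp/ssh pair above follows one pattern: copy testdata/cp-test.txt onto a node with `minikube cp`, then read it back with `minikube ssh -n <node> "sudo cat ..."` to verify. A condensed sketch of one round of that loop; the profile and node names mirror the log, while the comparison logic is illustrative, not the helpers' actual code:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	for _, node := range []string{"ha-480889", "ha-480889-m02", "ha-480889-m03", "ha-480889-m04"} {
		// minikube cp <src> <node>:<dst>
		if err := exec.Command("out/minikube-linux-arm64", "-p", "ha-480889", "cp",
			"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").Run(); err != nil {
			panic(err)
		}
		// read it back over ssh and compare
		got, err := exec.Command("out/minikube-linux-arm64", "-p", "ha-480889", "ssh",
			"-n", node, "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: match=%v\n", node, bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
	}
}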

TestMultiControlPlane/serial/StopSecondaryNode (12.92s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 node stop m02 --alsologtostderr -v 5: (12.12040255s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5: exit status 7 (798.463337ms)
-- stdout --
	ha-480889
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-480889-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-480889-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-480889-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1025 10:12:24.435720  304622 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:12:24.435841  304622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:12:24.435855  304622 out.go:374] Setting ErrFile to fd 2...
	I1025 10:12:24.435861  304622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:12:24.436158  304622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:12:24.436392  304622 out.go:368] Setting JSON to false
	I1025 10:12:24.436440  304622 mustload.go:65] Loading cluster: ha-480889
	I1025 10:12:24.436499  304622 notify.go:220] Checking for updates...
	I1025 10:12:24.437829  304622 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:12:24.437858  304622 status.go:174] checking status of ha-480889 ...
	I1025 10:12:24.438654  304622 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:12:24.459940  304622 status.go:371] ha-480889 host status = "Running" (err=<nil>)
	I1025 10:12:24.459969  304622 host.go:66] Checking if "ha-480889" exists ...
	I1025 10:12:24.460274  304622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889
	I1025 10:12:24.489513  304622 host.go:66] Checking if "ha-480889" exists ...
	I1025 10:12:24.489892  304622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:12:24.489942  304622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889
	I1025 10:12:24.510623  304622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889/id_rsa Username:docker}
	I1025 10:12:24.615623  304622 ssh_runner.go:195] Run: systemctl --version
	I1025 10:12:24.622850  304622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:12:24.636748  304622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:12:24.701485  304622 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-25 10:12:24.690179817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:12:24.702291  304622 kubeconfig.go:125] found "ha-480889" server: "https://192.168.49.254:8443"
	I1025 10:12:24.702342  304622 api_server.go:166] Checking apiserver status ...
	I1025 10:12:24.702399  304622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:12:24.714553  304622 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup
	I1025 10:12:24.723019  304622 api_server.go:182] apiserver freezer: "9:freezer:/docker/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/crio/crio-7964cc8d46ad873041a99b85a81c2be52ad63b9fcc890b9ff566a68eb256f1ca"
	I1025 10:12:24.723101  304622 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/808d21fd84e34dddc8caecb8f87253793a2d10bc9683dc6e2285736f4558b9fb/crio/crio-7964cc8d46ad873041a99b85a81c2be52ad63b9fcc890b9ff566a68eb256f1ca/freezer.state
	I1025 10:12:24.730894  304622 api_server.go:204] freezer state: "THAWED"
	I1025 10:12:24.730925  304622 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 10:12:24.739429  304622 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 10:12:24.739472  304622 status.go:463] ha-480889 apiserver status = Running (err=<nil>)
	I1025 10:12:24.739509  304622 status.go:176] ha-480889 status: &{Name:ha-480889 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:12:24.739545  304622 status.go:174] checking status of ha-480889-m02 ...
	I1025 10:12:24.739874  304622 cli_runner.go:164] Run: docker container inspect ha-480889-m02 --format={{.State.Status}}
	I1025 10:12:24.757840  304622 status.go:371] ha-480889-m02 host status = "Stopped" (err=<nil>)
	I1025 10:12:24.757863  304622 status.go:384] host is not running, skipping remaining checks
	I1025 10:12:24.757870  304622 status.go:176] ha-480889-m02 status: &{Name:ha-480889-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:12:24.757891  304622 status.go:174] checking status of ha-480889-m03 ...
	I1025 10:12:24.758256  304622 cli_runner.go:164] Run: docker container inspect ha-480889-m03 --format={{.State.Status}}
	I1025 10:12:24.776154  304622 status.go:371] ha-480889-m03 host status = "Running" (err=<nil>)
	I1025 10:12:24.776180  304622 host.go:66] Checking if "ha-480889-m03" exists ...
	I1025 10:12:24.776487  304622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m03
	I1025 10:12:24.793496  304622 host.go:66] Checking if "ha-480889-m03" exists ...
	I1025 10:12:24.793813  304622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:12:24.793869  304622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m03
	I1025 10:12:24.812172  304622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m03/id_rsa Username:docker}
	I1025 10:12:24.920264  304622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:12:24.938699  304622 kubeconfig.go:125] found "ha-480889" server: "https://192.168.49.254:8443"
	I1025 10:12:24.938729  304622 api_server.go:166] Checking apiserver status ...
	I1025 10:12:24.938772  304622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:12:24.950756  304622 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1185/cgroup
	I1025 10:12:24.959798  304622 api_server.go:182] apiserver freezer: "9:freezer:/docker/d85d80643a0ee59d8044924015128458fd344dfc9810531eea31d789c2ad9b19/crio/crio-b4d9abd2262b0c48a34fd2fb68a493e27a62cd35857ab8a148ead91cb4b196ce"
	I1025 10:12:24.959953  304622 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d85d80643a0ee59d8044924015128458fd344dfc9810531eea31d789c2ad9b19/crio/crio-b4d9abd2262b0c48a34fd2fb68a493e27a62cd35857ab8a148ead91cb4b196ce/freezer.state
	I1025 10:12:24.969289  304622 api_server.go:204] freezer state: "THAWED"
	I1025 10:12:24.969318  304622 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 10:12:24.977623  304622 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 10:12:24.977653  304622 status.go:463] ha-480889-m03 apiserver status = Running (err=<nil>)
	I1025 10:12:24.977663  304622 status.go:176] ha-480889-m03 status: &{Name:ha-480889-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:12:24.977691  304622 status.go:174] checking status of ha-480889-m04 ...
	I1025 10:12:24.978063  304622 cli_runner.go:164] Run: docker container inspect ha-480889-m04 --format={{.State.Status}}
	I1025 10:12:24.996093  304622 status.go:371] ha-480889-m04 host status = "Running" (err=<nil>)
	I1025 10:12:24.996123  304622 host.go:66] Checking if "ha-480889-m04" exists ...
	I1025 10:12:24.996419  304622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-480889-m04
	I1025 10:12:25.021264  304622 host.go:66] Checking if "ha-480889-m04" exists ...
	I1025 10:12:25.021581  304622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:12:25.021634  304622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-480889-m04
	I1025 10:12:25.045736  304622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/ha-480889-m04/id_rsa Username:docker}
	I1025 10:12:25.147408  304622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:12:25.161318  304622 status.go:176] ha-480889-m04 status: &{Name:ha-480889-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.92s)
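
The stderr log above spells out how `minikube status` decides an apiserver is Running: find the newest kube-apiserver process, read its freezer cgroup from /proc/<pid>/cgroup, confirm the freezer state is THAWED, then GET /healthz on the virtual IP. A standalone sketch of that probe, assuming it runs on the node with the needed privileges; the endpoint and TLS handling here are illustrative (this sketch skips loading the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// step 1: newest kube-apiserver pid (pgrep -xnf kube-apiserver.*minikube.*)
	pidOut, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(pidOut))

	// step 2: its freezer cgroup; the real probe then reads
	// <cgroup>/freezer.state and expects THAWED before the network check
	cg, err := os.ReadFile("/proc/" + pid + "/cgroup")
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(string(cg), "\n") {
		if strings.Contains(line, ":freezer:") {
			fmt.Println("freezer cgroup:", line)
		}
	}

	// step 3: healthz on the HA virtual IP; InsecureSkipVerify only because
	// this sketch does not load the cluster CA
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, strings.TrimSpace(string(body))) // expect: 200 ok
}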

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

TestMultiControlPlane/serial/RestartSecondaryNode (26.11s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 node start m02 --alsologtostderr -v 5
E1025 10:12:42.889002  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 node start m02 --alsologtostderr -v 5: (24.78348832s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5: (1.215211581s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (26.11s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.14s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.136590677s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.14s)

TestMultiControlPlane/serial/StopCluster (24.3s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 stop --alsologtostderr -v 5: (24.182634538s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5: exit status 7 (116.005315ms)
-- stdout --
	ha-480889
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-480889-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-480889-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1025 10:22:24.698749  315811 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:22:24.698972  315811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:22:24.699000  315811 out.go:374] Setting ErrFile to fd 2...
	I1025 10:22:24.699018  315811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:22:24.699320  315811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:22:24.699566  315811 out.go:368] Setting JSON to false
	I1025 10:22:24.699630  315811 mustload.go:65] Loading cluster: ha-480889
	I1025 10:22:24.699696  315811 notify.go:220] Checking for updates...
	I1025 10:22:24.701218  315811 config.go:182] Loaded profile config "ha-480889": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:22:24.701350  315811 status.go:174] checking status of ha-480889 ...
	I1025 10:22:24.702719  315811 cli_runner.go:164] Run: docker container inspect ha-480889 --format={{.State.Status}}
	I1025 10:22:24.721710  315811 status.go:371] ha-480889 host status = "Stopped" (err=<nil>)
	I1025 10:22:24.721732  315811 status.go:384] host is not running, skipping remaining checks
	I1025 10:22:24.721739  315811 status.go:176] ha-480889 status: &{Name:ha-480889 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:22:24.721768  315811 status.go:174] checking status of ha-480889-m02 ...
	I1025 10:22:24.722220  315811 cli_runner.go:164] Run: docker container inspect ha-480889-m02 --format={{.State.Status}}
	I1025 10:22:24.745422  315811 status.go:371] ha-480889-m02 host status = "Stopped" (err=<nil>)
	I1025 10:22:24.745442  315811 status.go:384] host is not running, skipping remaining checks
	I1025 10:22:24.745449  315811 status.go:176] ha-480889-m02 status: &{Name:ha-480889-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:22:24.745469  315811 status.go:174] checking status of ha-480889-m04 ...
	I1025 10:22:24.745776  315811 cli_runner.go:164] Run: docker container inspect ha-480889-m04 --format={{.State.Status}}
	I1025 10:22:24.762911  315811 status.go:371] ha-480889-m04 host status = "Stopped" (err=<nil>)
	I1025 10:22:24.762936  315811 status.go:384] host is not running, skipping remaining checks
	I1025 10:22:24.762943  315811 status.go:176] ha-480889-m04 status: &{Name:ha-480889-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.30s)
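
With every container stopped, the status checks above short-circuit after `docker container inspect --format={{.State.Status}}`: a non-running container yields Host: Stopped and the kubelet/apiserver probes are skipped. A minimal sketch of that mapping, using the node names from this run; the state-to-status table is illustrative, not minikube's exact logic:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus maps a container state to the Host field shown in the report;
// the mapping here is an assumption for illustration.
func hostStatus(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		container, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	switch strings.TrimSpace(string(out)) {
	case "running":
		return "Running", nil
	case "exited", "created":
		return "Stopped", nil
	default:
		return "Unknown", nil
	}
}

func main() {
	for _, node := range []string{"ha-480889", "ha-480889-m02", "ha-480889-m04"} {
		status, err := hostStatus(node)
		if err != nil {
			fmt.Println(node, "error:", err)
			continue
		}
		fmt.Println(node, status) // all "Stopped" after `minikube stop`
	}
}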

TestMultiControlPlane/serial/RestartCluster (90.61s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m29.640277666s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (90.61s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (80.43s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 node add --control-plane --alsologtostderr -v 5
E1025 10:24:21.255247  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 node add --control-plane --alsologtostderr -v 5: (1m19.312432872s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-480889 status --alsologtostderr -v 5: (1.113196834s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.43s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.122527428s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.12s)

TestJSONOutput/start/Command (80.36s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-974051 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1025 10:26:20.956703  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-974051 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.350670794s)
--- PASS: TestJSONOutput/start/Command (80.36s)
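
The Audit and parallel sub-tests that follow assert properties of the `--output=json` event stream, e.g. that "currentstep" values are distinct and increasing. A minimal reader for that stream, assuming one JSON event per line with a string currentstep under data; these field names are taken from observed minikube output and may drift between releases:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
		Message     string `json:"message"`
	} `json:"data"`
}

func main() {
	// usage: minikube start ... --output=json | thisprog
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // events can be long lines
	seen := map[string]bool{}
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event
		}
		if ev.Data.CurrentStep == "" {
			continue
		}
		if seen[ev.Data.CurrentStep] {
			fmt.Println("duplicate step:", ev.Data.CurrentStep)
		}
		seen[ev.Data.CurrentStep] = true
	}
}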

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.82s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-974051 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-974051 --output=json --user=testUser: (5.818263561s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-374863 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-374863 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (105.079358ms)

-- stdout --
	{"specversion":"1.0","id":"7ecc71e8-833e-4b0a-a9c8-0f3a9358572e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-374863] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"be239e81-dc1e-447e-9a0b-a6583064a2f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21767"}}
	{"specversion":"1.0","id":"285982cd-7812-445d-b21c-d508efde75f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6dcacc4c-bef6-48ab-b51e-de7804cf7f30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig"}}
	{"specversion":"1.0","id":"4bf0fa72-34d9-4a1c-b233-375e2df02fbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube"}}
	{"specversion":"1.0","id":"ede86815-090d-480f-bcb5-2daa7bbcd204","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"69e42df2-9638-41a0-9c1e-f8929533da06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a895ec37-4f7d-4429-994b-ed1859ab62d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-374863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-374863
--- PASS: TestErrorJSONOutput (0.26s)
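
The io.k8s.sigs.minikube.error event above carries a machine-readable error name and exit code. A minimal sketch of extracting them from a failed start, again assuming jq (the events.json filename is illustrative):

    # keep the event stream even though the command fails
    out/minikube-linux-arm64 start -p json-output-error-374863 --output=json --driver=fail > events.json || true
    # prints: DRV_UNSUPPORTED_OS (exit 56)
    jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + ")"' events.json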

                                                
                                    
TestKicCustomNetwork/create_custom_network (44.76s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-000916 --network=
E1025 10:27:24.331112  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:27:44.015423  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-000916 --network=: (42.55994173s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-000916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-000916
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-000916: (2.181174789s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.76s)
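
The two steps the test replays are: start a profile with --network= (here left empty, so minikube picks the network name itself) and confirm the network exists. Minimal sketch:

    out/minikube-linux-arm64 start -p docker-network-000916 --network=
    # the newly created network should appear next to bridge/host/none
    docker network ls --format {{.Name}}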

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.02s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-249167 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-249167 --network=bridge: (33.859136941s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-249167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-249167
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-249167: (2.140762549s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.02s)

                                                
                                    
TestKicExistingNetwork (33.73s)

=== RUN   TestKicExistingNetwork
I1025 10:28:24.660588  261256 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1025 10:28:24.677247  261256 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1025 10:28:24.677342  261256 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1025 10:28:24.677360  261256 cli_runner.go:164] Run: docker network inspect existing-network
W1025 10:28:24.693380  261256 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1025 10:28:24.693413  261256 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1025 10:28:24.693426  261256 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1025 10:28:24.693528  261256 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1025 10:28:24.712688  261256 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2218a4d410c8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:a0:c3:54:c6:1f} reservation:<nil>}
I1025 10:28:24.713014  261256 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001759470}
I1025 10:28:24.713039  261256 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1025 10:28:24.713093  261256 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1025 10:28:24.770867  261256 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-894085 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-894085 --network=existing-network: (31.416056727s)
helpers_test.go:175: Cleaning up "existing-network-894085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-894085
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-894085: (2.164850818s)
I1025 10:28:58.369542  261256 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.73s)
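
Here the harness pre-creates the docker network itself and only then points minikube at it; the flags below are copied from the log, with the subnet being whatever free private range the run picked:

    # create the network the same way the harness did
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 existing-network
    # reuse it instead of letting minikube create one
    out/minikube-linux-arm64 start -p existing-network-894085 --network=existing-network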

                                                
                                    
TestKicCustomSubnet (38.43s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-436397 --subnet=192.168.60.0/24
E1025 10:29:21.257917  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-436397 --subnet=192.168.60.0/24: (36.118056339s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-436397 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-436397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-436397
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-436397: (2.269140739s)
--- PASS: TestKicCustomSubnet (38.43s)
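
--subnet pins the CIDR of the network minikube creates, and the test reads it back straight from docker. Minimal sketch:

    out/minikube-linux-arm64 start -p custom-subnet-436397 --subnet=192.168.60.0/24
    # should print 192.168.60.0/24
    docker network inspect custom-subnet-436397 --format "{{(index .IPAM.Config 0).Subnet}}"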

                                                
                                    
TestKicStaticIP (36.71s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-498413 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-498413 --static-ip=192.168.200.200: (34.247976609s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-498413 ip
helpers_test.go:175: Cleaning up "static-ip-498413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-498413
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-498413: (2.273193605s)
--- PASS: TestKicStaticIP (36.71s)
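
Likewise, --static-ip pins the node address itself, with `minikube ip` as the read-back:

    out/minikube-linux-arm64 start -p static-ip-498413 --static-ip=192.168.200.200
    # should print 192.168.200.200
    out/minikube-linux-arm64 -p static-ip-498413 ip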

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (69.96s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-178081 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-178081 --driver=docker  --container-runtime=crio: (31.710620157s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-181158 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-181158 --driver=docker  --container-runtime=crio: (32.520749835s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-178081
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-181158
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-181158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-181158
E1025 10:31:20.950876  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-181158: (2.208588465s)
helpers_test.go:175: Cleaning up "first-178081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-178081
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-178081: (2.112302589s)
--- PASS: TestMinikubeProfile (69.96s)
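
The profile commands exercised above switch the active profile and dump the profile table; a minimal sketch:

    # make first-178081 the active profile, then inspect all profiles as JSON
    out/minikube-linux-arm64 profile first-178081
    out/minikube-linux-arm64 profile list -ojson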

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.68s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-746000 --memory=3072 --mount-string /tmp/TestMountStartserial3539219892/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-746000 --memory=3072 --mount-string /tmp/TestMountStartserial3539219892/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.677214055s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.68s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-746000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
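
The pair of tests above starts a no-Kubernetes node with a host-directory mount and then reads the mount back over ssh. A minimal sketch, trimmed to the essential flags (the host path is a placeholder for the temp dir the harness generates):

    out/minikube-linux-arm64 start -p mount-start-1-746000 --memory=3072 \
      --mount-string /path/on/host:/minikube-host --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
    # the host directory should be listable inside the node
    out/minikube-linux-arm64 -p mount-start-1-746000 ssh -- ls /minikube-host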

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.86s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-747961 --memory=3072 --mount-string /tmp/TestMountStartserial3539219892/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-747961 --memory=3072 --mount-string /tmp/TestMountStartserial3539219892/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.855833243s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.86s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-747961 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-746000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-746000 --alsologtostderr -v=5: (1.724398205s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-747961 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-747961
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-747961: (1.290438725s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.39s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-747961
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-747961: (7.392557216s)
--- PASS: TestMountStart/serial/RestartStopped (8.39s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-747961 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (141.03s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-688420 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-688420 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m20.472623137s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (141.03s)
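
A two-node bring-up plus health check is just:

    out/minikube-linux-arm64 start -p multinode-688420 --wait=true --memory=3072 \
      --nodes=2 -v=5 --alsologtostderr --driver=docker --container-runtime=crio
    # both the control plane and the worker should report Running
    out/minikube-linux-arm64 -p multinode-688420 status --alsologtostderr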

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.34s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-688420 -- rollout status deployment/busybox: (3.417586332s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- exec busybox-7b57f96db7-ttmfn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- exec busybox-7b57f96db7-vq7nq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- exec busybox-7b57f96db7-ttmfn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- exec busybox-7b57f96db7-vq7nq -- nslookup kubernetes.default
E1025 10:34:21.255766  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- exec busybox-7b57f96db7-ttmfn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- exec busybox-7b57f96db7-vq7nq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.34s)
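
The deployment check boils down to: apply the two-replica busybox manifest, wait for the rollout, then resolve external and in-cluster names from each pod. Sketch (pod names are from this run and will differ elsewhere):

    out/minikube-linux-arm64 kubectl -p multinode-688420 -- rollout status deployment/busybox
    out/minikube-linux-arm64 kubectl -p multinode-688420 -- exec busybox-7b57f96db7-ttmfn -- nslookup kubernetes.io
    out/minikube-linux-arm64 kubectl -p multinode-688420 -- exec busybox-7b57f96db7-ttmfn -- nslookup kubernetes.default.svc.cluster.local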

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- exec busybox-7b57f96db7-ttmfn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- exec busybox-7b57f96db7-ttmfn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- exec busybox-7b57f96db7-vq7nq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688420 -- exec busybox-7b57f96db7-vq7nq -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

                                                
                                    
TestMultiNode/serial/AddNode (58.27s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-688420 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-688420 -v=5 --alsologtostderr: (57.537256237s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.27s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-688420 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.74s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.74s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.69s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 cp testdata/cp-test.txt multinode-688420:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 cp multinode-688420:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2437544643/001/cp-test_multinode-688420.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 cp multinode-688420:/home/docker/cp-test.txt multinode-688420-m02:/home/docker/cp-test_multinode-688420_multinode-688420-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420-m02 "sudo cat /home/docker/cp-test_multinode-688420_multinode-688420-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 cp multinode-688420:/home/docker/cp-test.txt multinode-688420-m03:/home/docker/cp-test_multinode-688420_multinode-688420-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420-m03 "sudo cat /home/docker/cp-test_multinode-688420_multinode-688420-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 cp testdata/cp-test.txt multinode-688420-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 cp multinode-688420-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2437544643/001/cp-test_multinode-688420-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 cp multinode-688420-m02:/home/docker/cp-test.txt multinode-688420:/home/docker/cp-test_multinode-688420-m02_multinode-688420.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420 "sudo cat /home/docker/cp-test_multinode-688420-m02_multinode-688420.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 cp multinode-688420-m02:/home/docker/cp-test.txt multinode-688420-m03:/home/docker/cp-test_multinode-688420-m02_multinode-688420-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420-m03 "sudo cat /home/docker/cp-test_multinode-688420-m02_multinode-688420-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 cp testdata/cp-test.txt multinode-688420-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 cp multinode-688420-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2437544643/001/cp-test_multinode-688420-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 cp multinode-688420-m03:/home/docker/cp-test.txt multinode-688420:/home/docker/cp-test_multinode-688420-m03_multinode-688420.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420 "sudo cat /home/docker/cp-test_multinode-688420-m03_multinode-688420.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 cp multinode-688420-m03:/home/docker/cp-test.txt multinode-688420-m02:/home/docker/cp-test_multinode-688420-m03_multinode-688420-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420-m02 "sudo cat /home/docker/cp-test_multinode-688420-m03_multinode-688420-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.69s)
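
Every hop in the copy matrix above uses just two primitives: minikube cp to move the file and minikube ssh to verify it landed. Sketch of a single node-to-node hop (the target filename is illustrative):

    out/minikube-linux-arm64 -p multinode-688420 cp testdata/cp-test.txt multinode-688420:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-688420 cp multinode-688420:/home/docker/cp-test.txt multinode-688420-m02:/home/docker/cp-test_copy.txt
    out/minikube-linux-arm64 -p multinode-688420 ssh -n multinode-688420-m02 "sudo cat /home/docker/cp-test_copy.txt"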

                                                
                                    
TestMultiNode/serial/StopNode (2.48s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-688420 node stop m03: (1.359615211s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-688420 status: exit status 7 (544.078148ms)

-- stdout --
	multinode-688420
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-688420-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-688420-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-688420 status --alsologtostderr: exit status 7 (569.26565ms)

-- stdout --
	multinode-688420
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-688420-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-688420-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 10:35:34.518819  366545 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:35:34.518971  366545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:35:34.518983  366545 out.go:374] Setting ErrFile to fd 2...
	I1025 10:35:34.518989  366545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:35:34.519473  366545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:35:34.519793  366545 out.go:368] Setting JSON to false
	I1025 10:35:34.519834  366545 mustload.go:65] Loading cluster: multinode-688420
	I1025 10:35:34.526933  366545 config.go:182] Loaded profile config "multinode-688420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:35:34.527048  366545 status.go:174] checking status of multinode-688420 ...
	I1025 10:35:34.529069  366545 cli_runner.go:164] Run: docker container inspect multinode-688420 --format={{.State.Status}}
	I1025 10:35:34.529759  366545 notify.go:220] Checking for updates...
	I1025 10:35:34.554561  366545 status.go:371] multinode-688420 host status = "Running" (err=<nil>)
	I1025 10:35:34.554588  366545 host.go:66] Checking if "multinode-688420" exists ...
	I1025 10:35:34.554913  366545 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-688420
	I1025 10:35:34.578092  366545 host.go:66] Checking if "multinode-688420" exists ...
	I1025 10:35:34.578396  366545 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:35:34.578461  366545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-688420
	I1025 10:35:34.597462  366545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33263 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/multinode-688420/id_rsa Username:docker}
	I1025 10:35:34.699547  366545 ssh_runner.go:195] Run: systemctl --version
	I1025 10:35:34.706008  366545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:35:34.719259  366545 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:35:34.792404  366545 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:35:34.776349817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:35:34.792951  366545 kubeconfig.go:125] found "multinode-688420" server: "https://192.168.67.2:8443"
	I1025 10:35:34.792997  366545 api_server.go:166] Checking apiserver status ...
	I1025 10:35:34.793049  366545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:35:34.804548  366545 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1249/cgroup
	I1025 10:35:34.813135  366545 api_server.go:182] apiserver freezer: "9:freezer:/docker/7f131a034160ef40368d1f4854a4df8e9b49088a033f89703f7f3e234f49bb03/crio/crio-7d45fed08fbd48969690694f5ddbdd830a3adf692ca6a2aadef7a4376cfaa64d"
	I1025 10:35:34.813215  366545 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7f131a034160ef40368d1f4854a4df8e9b49088a033f89703f7f3e234f49bb03/crio/crio-7d45fed08fbd48969690694f5ddbdd830a3adf692ca6a2aadef7a4376cfaa64d/freezer.state
	I1025 10:35:34.820958  366545 api_server.go:204] freezer state: "THAWED"
	I1025 10:35:34.820987  366545 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1025 10:35:34.829169  366545 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1025 10:35:34.829195  366545 status.go:463] multinode-688420 apiserver status = Running (err=<nil>)
	I1025 10:35:34.829206  366545 status.go:176] multinode-688420 status: &{Name:multinode-688420 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:35:34.829223  366545 status.go:174] checking status of multinode-688420-m02 ...
	I1025 10:35:34.829522  366545 cli_runner.go:164] Run: docker container inspect multinode-688420-m02 --format={{.State.Status}}
	I1025 10:35:34.846653  366545 status.go:371] multinode-688420-m02 host status = "Running" (err=<nil>)
	I1025 10:35:34.846678  366545 host.go:66] Checking if "multinode-688420-m02" exists ...
	I1025 10:35:34.846979  366545 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-688420-m02
	I1025 10:35:34.864252  366545 host.go:66] Checking if "multinode-688420-m02" exists ...
	I1025 10:35:34.864576  366545 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:35:34.864621  366545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-688420-m02
	I1025 10:35:34.882323  366545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33268 SSHKeyPath:/home/jenkins/minikube-integration/21767-259409/.minikube/machines/multinode-688420-m02/id_rsa Username:docker}
	I1025 10:35:34.983247  366545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:35:35.000777  366545 status.go:176] multinode-688420-m02 status: &{Name:multinode-688420-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:35:35.000826  366545 status.go:174] checking status of multinode-688420-m03 ...
	I1025 10:35:35.001227  366545 cli_runner.go:164] Run: docker container inspect multinode-688420-m03 --format={{.State.Status}}
	I1025 10:35:35.026996  366545 status.go:371] multinode-688420-m03 host status = "Stopped" (err=<nil>)
	I1025 10:35:35.027019  366545 status.go:384] host is not running, skipping remaining checks
	I1025 10:35:35.027043  366545 status.go:176] multinode-688420-m03 status: &{Name:multinode-688420-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.48s)
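
Note the status semantics shown above: once any node is stopped, status exits non-zero (exit status 7 in this run) while still printing the full per-node table, so scripts should capture the output rather than treat the exit code as a hard failure:

    out/minikube-linux-arm64 -p multinode-688420 node stop m03
    # a non-zero exit here only means "some hosts are stopped"
    out/minikube-linux-arm64 -p multinode-688420 status || true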

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.31s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-688420 node start m03 -v=5 --alsologtostderr: (7.46698232s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.31s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (72.15s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-688420
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-688420
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-688420: (25.032845092s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-688420 --wait=true -v=5 --alsologtostderr
E1025 10:36:20.950602  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-688420 --wait=true -v=5 --alsologtostderr: (46.995418784s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-688420
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.15s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.99s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-688420 node delete m03: (5.259637959s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.99s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.01s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-688420 stop: (23.816994s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-688420 status: exit status 7 (88.955626ms)

-- stdout --
	multinode-688420
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-688420-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-688420 status --alsologtostderr: exit status 7 (99.373178ms)

-- stdout --
	multinode-688420
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-688420-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 10:37:25.443929  374318 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:37:25.444047  374318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:37:25.444060  374318 out.go:374] Setting ErrFile to fd 2...
	I1025 10:37:25.444066  374318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:37:25.444346  374318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:37:25.444546  374318 out.go:368] Setting JSON to false
	I1025 10:37:25.444594  374318 mustload.go:65] Loading cluster: multinode-688420
	I1025 10:37:25.444660  374318 notify.go:220] Checking for updates...
	I1025 10:37:25.448359  374318 config.go:182] Loaded profile config "multinode-688420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:37:25.448393  374318 status.go:174] checking status of multinode-688420 ...
	I1025 10:37:25.448961  374318 cli_runner.go:164] Run: docker container inspect multinode-688420 --format={{.State.Status}}
	I1025 10:37:25.466558  374318 status.go:371] multinode-688420 host status = "Stopped" (err=<nil>)
	I1025 10:37:25.466583  374318 status.go:384] host is not running, skipping remaining checks
	I1025 10:37:25.466591  374318 status.go:176] multinode-688420 status: &{Name:multinode-688420 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:37:25.466618  374318 status.go:174] checking status of multinode-688420-m02 ...
	I1025 10:37:25.466925  374318 cli_runner.go:164] Run: docker container inspect multinode-688420-m02 --format={{.State.Status}}
	I1025 10:37:25.496381  374318 status.go:371] multinode-688420-m02 host status = "Stopped" (err=<nil>)
	I1025 10:37:25.496407  374318 status.go:384] host is not running, skipping remaining checks
	I1025 10:37:25.496413  374318 status.go:176] multinode-688420-m02 status: &{Name:multinode-688420-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.01s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.33s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-688420 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-688420 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.623638239s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688420 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.33s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (38.6s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-688420
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-688420-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-688420-m02 --driver=docker  --container-runtime=crio: exit status 14 (94.536383ms)

-- stdout --
	* [multinode-688420-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-688420-m02' is duplicated with machine name 'multinode-688420-m02' in profile 'multinode-688420'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-688420-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-688420-m03 --driver=docker  --container-runtime=crio: (35.990205323s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-688420
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-688420: exit status 80 (356.584998ms)

-- stdout --
	* Adding node m03 to cluster multinode-688420 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-688420-m03 already exists in multinode-688420-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-688420-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-688420-m03: (2.099321261s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.60s)
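
The rule this test pins down: a new profile name may not collide with a machine name inside an existing multi-node profile. A minimal sketch of the three outcomes seen above, assuming multinode-688420 already owns a node called multinode-688420-m02:

	minikube node list -p multinode-688420                                            # lists the machines, including ...-m02
	minikube start -p multinode-688420-m02 --driver=docker --container-runtime=crio   # exit 14: duplicate profile/machine name
	minikube start -p multinode-688420-m03 --driver=docker --container-runtime=crio   # succeeds, but now claims the -m03 name
	minikube node add -p multinode-688420                                             # exit 80: node ...-m03 already exists elsewhere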

TestPreload (127.12s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-861451 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1025 10:39:21.255010  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-861451 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m6.602348507s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-861451 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-861451 image pull gcr.io/k8s-minikube/busybox: (2.380742619s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-861451
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-861451: (5.900103235s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-861451 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-861451 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.549744568s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-861451 image list
helpers_test.go:175: Cleaning up "test-preload-861451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-861451
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-861451: (2.437838187s)
--- PASS: TestPreload (127.12s)
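
The steps above amount to a manual preload check: build a cluster on an older Kubernetes with preloaded tarballs disabled, add an image, and verify it survives a stop/start. A minimal sketch with a hypothetical profile name:

	minikube start -p preload-check --memory=3072 --preload=false --kubernetes-version=v1.32.0 --driver=docker --container-runtime=crio
	minikube -p preload-check image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-check
	minikube start -p preload-check --memory=3072 --wait=true --driver=docker --container-runtime=crio
	minikube -p preload-check image list    # the pulled busybox image should still be listed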

TestScheduledStopUnix (112.08s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-606742 --memory=3072 --driver=docker  --container-runtime=crio
E1025 10:41:20.951397  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-606742 --memory=3072 --driver=docker  --container-runtime=crio: (35.456359001s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-606742 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-606742 -n scheduled-stop-606742
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-606742 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1025 10:41:40.915092  261256 retry.go:31] will retry after 130.275µs: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.918088  261256 retry.go:31] will retry after 137.759µs: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.919370  261256 retry.go:31] will retry after 332.225µs: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.920475  261256 retry.go:31] will retry after 390.19µs: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.921600  261256 retry.go:31] will retry after 473.802µs: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.922721  261256 retry.go:31] will retry after 403.324µs: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.923860  261256 retry.go:31] will retry after 1.706235ms: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.927718  261256 retry.go:31] will retry after 2.324425ms: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.930940  261256 retry.go:31] will retry after 2.490691ms: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.934156  261256 retry.go:31] will retry after 3.776802ms: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.938368  261256 retry.go:31] will retry after 3.788119ms: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.942615  261256 retry.go:31] will retry after 11.712473ms: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.955064  261256 retry.go:31] will retry after 17.338443ms: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.973333  261256 retry.go:31] will retry after 15.914943ms: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
I1025 10:41:40.989589  261256 retry.go:31] will retry after 37.804945ms: open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/scheduled-stop-606742/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-606742 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-606742 -n scheduled-stop-606742
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-606742
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-606742 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-606742
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-606742: exit status 7 (73.07747ms)

-- stdout --
	scheduled-stop-606742
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-606742 -n scheduled-stop-606742
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-606742 -n scheduled-stop-606742: exit status 7 (71.615162ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-606742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-606742
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-606742: (5.020726546s)
--- PASS: TestScheduledStopUnix (112.08s)
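
The scheduled-stop flow the test drives, as a minimal sketch (profile name hypothetical):

	minikube stop -p demo --schedule 5m                   # arm a stop five minutes out
	minikube status -p demo --format='{{.TimeToStop}}'    # inspect the remaining time
	minikube stop -p demo --cancel-scheduled              # disarm; the host keeps running
	minikube stop -p demo --schedule 15s                  # re-arm with a short fuse
	sleep 20; minikube status -p demo                     # exit 7: host/kubelet/apiserver all Stopped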

TestInsufficientStorage (14.51s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-661899 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-661899 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.912731807s)

-- stdout --
	{"specversion":"1.0","id":"7ffac910-4c6b-4c1a-ba06-90ae4d66948d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-661899] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"78ba593d-cb53-46c4-9b1a-aab46c1e9828","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21767"}}
	{"specversion":"1.0","id":"00aa0674-70d8-4977-89dd-0bb791b18703","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4cd8579b-0290-47d0-9073-e44bb0c43587","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig"}}
	{"specversion":"1.0","id":"74320f0f-c895-49e0-ab9a-42ea5f0a1ae5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube"}}
	{"specversion":"1.0","id":"94b6eb2e-aa48-4e16-a4e8-34cd6a02874e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"18263e9c-a614-4053-87a8-eca49c0387d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3569be8f-7733-4aff-b5f8-c2f4be14e76d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"72408e8e-9d1c-4c79-860b-b370e9661a6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"270e6f47-8aea-49ba-9a2c-a724c018e364","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"86e0735d-66f4-4031-9da0-d8d7d8d93f3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2187889e-f302-499b-bc15-88042f418d1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-661899\" primary control-plane node in \"insufficient-storage-661899\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"280c0f4e-d264-458e-917b-81198490f442","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc9178b5-a835-43c4-b6c3-5935bb923acc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b6b490c-e67f-49ff-8d1c-02203dd2eedc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-661899 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-661899 --output=json --layout=cluster: exit status 7 (314.387353ms)

-- stdout --
	{"Name":"insufficient-storage-661899","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-661899","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1025 10:43:09.218127  390579 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-661899" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-661899 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-661899 --output=json --layout=cluster: exit status 7 (301.297863ms)

-- stdout --
	{"Name":"insufficient-storage-661899","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-661899","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1025 10:43:09.520843  390645 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-661899" does not appear in /home/jenkins/minikube-integration/21767-259409/kubeconfig
	E1025 10:43:09.530748  390645 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/insufficient-storage-661899/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-661899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-661899
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-661899: (1.979482748s)
--- PASS: TestInsufficientStorage (14.51s)
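
Judging by the MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 values echoed in the JSON above, the shortage appears to be simulated through environment knobs rather than by filling the disk. A sketch of the same probe under that assumption (profile name hypothetical):

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
		minikube start -p storage-demo --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio   # exit 26, RSRC_DOCKER_STORAGE
	minikube status -p storage-demo --output=json --layout=cluster   # exit 7, StatusCode 507 (InsufficientStorage)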

TestRunningBinaryUpgrade (50.69s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1031928636 start -p running-upgrade-031456 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1031928636 start -p running-upgrade-031456 --memory=3072 --vm-driver=docker  --container-runtime=crio: (29.896130322s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-031456 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-031456 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.090117349s)
helpers_test.go:175: Cleaning up "running-upgrade-031456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-031456
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-031456: (1.994406972s)
--- PASS: TestRunningBinaryUpgrade (50.69s)
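
The pattern here is an in-place binary upgrade: an older released binary provisions the cluster, then the freshly built binary adopts the still-running profile. A minimal sketch, with the old binary path hypothetical (note the legacy --vm-driver spelling it expects):

	/tmp/minikube-v1.32.0 start -p running-upgrade --memory=3072 --vm-driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p running-upgrade --memory=3072 --driver=docker --container-runtime=crio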

TestKubernetesUpgrade (358.97s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.384651218s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-291330
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-291330: (1.574508855s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-291330 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-291330 status --format={{.Host}}: exit status 7 (127.737968ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m34.855422824s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-291330 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (145.973973ms)

-- stdout --
	* [kubernetes-upgrade-291330] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-291330
	    minikube start -p kubernetes-upgrade-291330 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2913302 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-291330 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-291330 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.078755354s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-291330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-291330
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-291330: (2.683825472s)
--- PASS: TestKubernetesUpgrade (358.97s)
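
The version lifecycle exercised above, as a minimal sketch (profile name hypothetical): an upgrade needs only a stop plus a start with a newer --kubernetes-version, while a downgrade is refused outright and must go through a delete or a second profile, as the suggestion text shows:

	minikube start -p kupgrade --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	minikube stop -p kupgrade
	minikube start -p kupgrade --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio   # upgrade succeeds
	minikube start -p kupgrade --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # exit 106, K8S_DOWNGRADE_UNSUPPORTED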

TestMissingContainerUpgrade (113.23s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1737342444 start -p missing-upgrade-486371 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1737342444 start -p missing-upgrade-486371 --memory=3072 --driver=docker  --container-runtime=crio: (58.476995411s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-486371
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-486371
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-486371 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1025 10:44:21.255698  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:44:24.017117  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-486371 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.198945837s)
helpers_test.go:175: Cleaning up "missing-upgrade-486371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-486371
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-486371: (2.161639077s)
--- PASS: TestMissingContainerUpgrade (113.23s)
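
Here the node container is removed behind minikube's back before the upgrade, so the new binary has to recreate it from the surviving profile config. A minimal sketch (old binary path and profile name hypothetical):

	/tmp/minikube-v1.32.0 start -p missing-upgrade --memory=3072 --driver=docker --container-runtime=crio
	docker stop missing-upgrade && docker rm missing-upgrade    # simulate the lost container
	out/minikube-linux-arm64 start -p missing-upgrade --memory=3072 --driver=docker --container-runtime=crio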

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670512 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-670512 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (92.892407ms)

-- stdout --
	* [NoKubernetes-670512] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
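
The validation shown above: --no-kubernetes and an explicit --kubernetes-version are mutually exclusive, and the error text points at clearing any globally configured version first. A minimal sketch (profile name hypothetical):

	minikube start -p nok8s --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # exit 14, MK_USAGE
	minikube config unset kubernetes-version    # drop a version pinned in the global config, per the error message
	minikube start -p nok8s --no-kubernetes --driver=docker --container-runtime=crio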

TestNoKubernetes/serial/StartWithK8s (47.46s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670512 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-670512 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (46.805647884s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-670512 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.46s)

TestNoKubernetes/serial/StartWithStopK8s (28.97s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670512 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1025 10:44:04.337771  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-670512 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.61057208s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-670512 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-670512 status -o json: exit status 2 (308.10694ms)

-- stdout --
	{"Name":"NoKubernetes-670512","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-670512
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-670512: (2.048883829s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.97s)
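
Worth noting from the status call above: "minikube status" encodes cluster state in its exit code, so exit 2 reflects a running host whose Kubernetes components are stopped, not a failure of the command itself. A sketch of how a script might read that (profile name hypothetical):

	minikube -p nok8s status -o json || echo "status exited with $?"    # 2 => Host Running, Kubelet/APIServer Stopped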

TestNoKubernetes/serial/Start (9.61s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670512 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-670512 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.605769666s)
--- PASS: TestNoKubernetes/serial/Start (9.61s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-670512 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-670512 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.659951ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
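
The verification boils down to running systemctl inside the node over ssh and letting the exit status speak: systemd returns status 3 for an inactive unit, which minikube surfaces as a non-zero exit. A minimal sketch (profile name hypothetical):

	minikube ssh -p nok8s "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"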

TestNoKubernetes/serial/ProfileList (1.13s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.13s)

TestNoKubernetes/serial/Stop (1.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-670512
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-670512: (1.370180798s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

TestNoKubernetes/serial/StartNoArgs (7.63s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-670512 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-670512 --driver=docker  --container-runtime=crio: (7.626613581s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.63s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-670512 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-670512 "sudo systemctl is-active --quiet service kubelet": exit status 1 (344.76923ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

TestStoppedBinaryUpgrade/Setup (0.67s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.67s)

TestStoppedBinaryUpgrade/Upgrade (73.65s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3347456915 start -p stopped-upgrade-190411 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3347456915 start -p stopped-upgrade-190411 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.184040108s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3347456915 -p stopped-upgrade-190411 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3347456915 -p stopped-upgrade-190411 stop: (12.044882889s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-190411 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-190411 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.420826794s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (73.65s)
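
The stopped variant of the binary upgrade: the old binary both creates and stops the cluster, and the new binary performs the cold start. A minimal sketch (old binary path and profile name hypothetical):

	/tmp/minikube-v1.32.0 start -p stopped-upgrade --memory=3072 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.32.0 -p stopped-upgrade stop
	out/minikube-linux-arm64 start -p stopped-upgrade --memory=3072 --driver=docker --container-runtime=crio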

TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-190411
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-190411: (1.219276181s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

TestPause/serial/Start (83.67s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-494622 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-494622 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m23.6722151s)
--- PASS: TestPause/serial/Start (83.67s)

TestPause/serial/SecondStartNoReconfiguration (30.79s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-494622 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-494622 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.760699944s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.79s)

TestNetworkPlugins/group/false (5.9s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-759329 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-759329 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (252.376419ms)

-- stdout --
	* [false-759329] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1025 10:49:58.554819  427648 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:49:58.554943  427648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:49:58.554948  427648 out.go:374] Setting ErrFile to fd 2...
	I1025 10:49:58.554953  427648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:49:58.555222  427648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-259409/.minikube/bin
	I1025 10:49:58.555625  427648 out.go:368] Setting JSON to false
	I1025 10:49:58.556442  427648 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9150,"bootTime":1761380249,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1025 10:49:58.556502  427648 start.go:141] virtualization:  
	I1025 10:49:58.560084  427648 out.go:179] * [false-759329] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:49:58.564073  427648 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:49:58.564240  427648 notify.go:220] Checking for updates...
	I1025 10:49:58.568051  427648 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:49:58.571065  427648 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-259409/kubeconfig
	I1025 10:49:58.574127  427648 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-259409/.minikube
	I1025 10:49:58.577198  427648 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:49:58.580155  427648 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:49:58.583725  427648 config.go:182] Loaded profile config "kubernetes-upgrade-291330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:49:58.583909  427648 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:49:58.620120  427648 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:49:58.620252  427648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:49:58.718117  427648 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:49:58.707594851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:49:58.718220  427648 docker.go:318] overlay module found
	I1025 10:49:58.721192  427648 out.go:179] * Using the docker driver based on user configuration
	I1025 10:49:58.723996  427648 start.go:305] selected driver: docker
	I1025 10:49:58.724015  427648 start.go:925] validating driver "docker" against <nil>
	I1025 10:49:58.724029  427648 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:49:58.727605  427648 out.go:203] 
	W1025 10:49:58.730513  427648 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1025 10:49:58.733293  427648 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-759329 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-759329

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-759329

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-759329

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-759329

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-759329

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-759329

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-759329

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-759329

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-759329

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-759329

>>> host: /etc/nsswitch.conf:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: /etc/hosts:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: /etc/resolv.conf:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-759329

>>> host: crictl pods:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: crictl containers:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> k8s: describe netcat deployment:
error: context "false-759329" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-759329" does not exist

>>> k8s: netcat logs:
error: context "false-759329" does not exist

>>> k8s: describe coredns deployment:
error: context "false-759329" does not exist

>>> k8s: describe coredns pods:
error: context "false-759329" does not exist

>>> k8s: coredns logs:
error: context "false-759329" does not exist

>>> k8s: describe api server pod(s):
error: context "false-759329" does not exist

>>> k8s: api server logs:
error: context "false-759329" does not exist

>>> host: /etc/cni:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: ip a s:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: ip r s:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: iptables-save:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: iptables table nat:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> k8s: describe kube-proxy daemon set:
error: context "false-759329" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-759329" does not exist

>>> k8s: kube-proxy logs:
error: context "false-759329" does not exist

>>> host: kubelet daemon status:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: kubelet daemon config:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> k8s: kubelet logs:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:45:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-291330
contexts:
- context:
    cluster: kubernetes-upgrade-291330
    user: kubernetes-upgrade-291330
  name: kubernetes-upgrade-291330
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-291330
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kubernetes-upgrade-291330/client.crt
    client-key: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kubernetes-upgrade-291330/client.key
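
Note: current-context in the config above is empty, so a bare kubectl call here has no cluster selected; the harness always passes --context explicitly. For reference, selecting the one context defined above would look like this (standard kubectl, shown only as a sketch):

	kubectl config use-context kubernetes-upgrade-291330
	kubectl config current-context   # now prints kubernetes-upgrade-291330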

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-759329

>>> host: docker daemon status:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: docker daemon config:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: /etc/docker/daemon.json:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: docker system info:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: cri-docker daemon status:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: cri-docker daemon config:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: cri-dockerd version:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: containerd daemon status:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: containerd daemon config:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: /etc/containerd/config.toml:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: containerd config dump:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: crio daemon status:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: crio daemon config:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: /etc/crio:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

>>> host: crio config:
* Profile "false-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-759329"

----------------------- debugLogs end: false-759329 [took: 5.411994846s] --------------------------------
helpers_test.go:175: Cleaning up "false-759329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-759329
--- PASS: TestNetworkPlugins/group/false (5.90s)
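
The repeated "Profile ... not found" output above is expected: debugLogs probes the false-759329 profile even though the test never created it. A minimal cleanup sketch, using the same commands the messages recommend plus minikube delete:

	out/minikube-linux-arm64 profile list             # confirm which profiles actually exist
	out/minikube-linux-arm64 delete -p false-759329   # harmless no-op when the profile is absent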

TestStartStop/group/old-k8s-version/serial/FirstStart (65.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m5.728646355s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (65.73s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-031983 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3ef55609-5cc4-4fa3-879c-98e876c9ac41] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3ef55609-5cc4-4fa3-879c-98e876c9ac41] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004085281s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-031983 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)
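
The DeployApp step creates the busybox pod from testdata/busybox.yaml and polls the integration-test=busybox label until the pod is Ready. A rough hand-run equivalent, substituting kubectl wait for the harness's poller (the 8m timeout mirrors the test's):

	kubectl --context old-k8s-version-031983 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-031983 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
	kubectl --context old-k8s-version-031983 exec busybox -- /bin/sh -c "ulimit -n"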

TestStartStop/group/old-k8s-version/serial/Stop (12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-031983 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-031983 --alsologtostderr -v=3: (12.000852853s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.00s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-031983 -n old-k8s-version-031983
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-031983 -n old-k8s-version-031983: exit status 7 (96.865346ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-031983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
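
As the "may be ok" note says, exit status 7 from minikube status marks a stopped host rather than a hard failure, which is exactly what this test expects after the Stop step. The same tolerance, sketched in shell:

	out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-031983
	if [ $? -eq 7 ]; then echo "host stopped (may be ok)"; fi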

TestStartStop/group/old-k8s-version/serial/SecondStart (52.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-031983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.473666858s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-031983 -n old-k8s-version-031983
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.90s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-d2zpz" [540dc871-78a9-4dd4-adb6-ae9d0481d23c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003429442s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-d2zpz" [540dc871-78a9-4dd4-adb6-ae9d0481d23c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003374229s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-031983 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-031983 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
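
image list --format=json prints one JSON object per image in the runtime's store; the assertion flags any repo tag outside the stock Kubernetes image set. Assuming the JSON shape carries a repoTags array (as recent minikube releases emit) and jq is installed, the same scan by hand:

	out/minikube-linux-arm64 -p old-k8s-version-031983 image list --format=json | jq -r '.[].repoTags[]' | grep -v '^registry.k8s.io/'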

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m19.661078747s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.66s)
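
--apiserver-port=8444 moves the API server off the default 8443, so the generated kubeconfig entry should point at the new port. A quick check against the merged kubeconfig (the jsonpath filter is illustrative):

	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-223394")].cluster.server}'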

TestStartStop/group/embed-certs/serial/FirstStart (77.04s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m17.036092702s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.04s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-223394 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9fe5f74f-d071-4f2d-8540-22336c347abd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9fe5f74f-d071-4f2d-8540-22336c347abd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004087771s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-223394 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-223394 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-223394 --alsologtostderr -v=3: (12.02652125s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223394 -n default-k8s-diff-port-223394
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223394 -n default-k8s-diff-port-223394: exit status 7 (99.861068ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-223394 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 10:56:20.951209  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-223394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.533520864s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-223394 -n default-k8s-diff-port-223394
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.05s)

TestStartStop/group/embed-certs/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-348342 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a1179500-843c-448c-966c-265e80b91b4f] Pending
helpers_test.go:352: "busybox" [a1179500-843c-448c-966c-265e80b91b4f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a1179500-843c-448c-966c-265e80b91b4f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005657853s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-348342 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.44s)

TestStartStop/group/embed-certs/serial/Stop (12.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-348342 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-348342 --alsologtostderr -v=3: (12.12109567s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.12s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-348342 -n embed-certs-348342
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-348342 -n embed-certs-348342: exit status 7 (74.381442ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-348342 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (55.34s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-348342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.788456612s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-348342 -n embed-certs-348342
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.34s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wmhcq" [795df323-7eee-4cc1-b1fd-5f0214b39706] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005190976s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wmhcq" [795df323-7eee-4cc1-b1fd-5f0214b39706] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004430872s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-223394 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-223394 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/no-preload/serial/FirstStart (66.3s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m6.304015634s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.30s)
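
--preload=false skips the preloaded image/state tarball, so every image is pulled individually during boot, which is why this FirstStart runs longer than most. One way to inspect what actually landed in the CRI-O image store afterwards, sketched via minikube ssh:

	out/minikube-linux-arm64 ssh -p no-preload-093313 -- sudo crictl images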

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-g46wr" [80e0bcd4-8c33-402a-ae1a-b8fbcd2183cf] Running
E1025 10:57:49.490060  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:57:49.497165  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:57:49.508591  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:57:49.530719  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:57:49.572076  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:57:49.653439  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:57:49.814931  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:57:50.136272  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:57:50.778588  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:57:52.060852  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003641568s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-g46wr" [80e0bcd4-8c33-402a-ae1a-b8fbcd2183cf] Running
E1025 10:57:54.622405  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004059934s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-348342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-348342 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/FirstStart (42.15s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 10:58:30.467142  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.153239465s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.15s)
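
--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 is forwarded to kubeadm init, so node pod CIDRs should be carved out of 10.42.0.0/16 once a CNI is installed. A quick verification sketch:

	kubectl --context newest-cni-374679 get nodes -o jsonpath='{.items[*].spec.podCIDR}'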

TestStartStop/group/no-preload/serial/DeployApp (8.48s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-093313 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [75418b38-6328-42b9-b710-7cee6dc929c2] Pending
helpers_test.go:352: "busybox" [75418b38-6328-42b9-b710-7cee6dc929c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [75418b38-6328-42b9-b710-7cee6dc929c2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003846799s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-093313 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.48s)

TestStartStop/group/no-preload/serial/Stop (12.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-093313 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-093313 --alsologtostderr -v=3: (12.300278438s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.30s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.36s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-374679 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-374679 --alsologtostderr -v=3: (1.362036437s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-374679 -n newest-cni-374679
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-374679 -n newest-cni-374679: exit status 7 (72.013029ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-374679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (20.85s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-374679 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (20.295447345s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-374679 -n newest-cni-374679
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.85s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-093313 -n no-preload-093313
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-093313 -n no-preload-093313: exit status 7 (108.800904ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-093313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/no-preload/serial/SecondStart (55.98s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 10:59:11.429167  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-093313 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.485454674s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-093313 -n no-preload-093313
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.98s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-374679 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

TestNetworkPlugins/group/auto/Start (86.1s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m26.099989373s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.10s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xrszz" [609a0c23-fcd6-4966-b4dd-6411fdf189f7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005195261s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xrszz" [609a0c23-fcd6-4966-b4dd-6411fdf189f7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00272333s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-093313 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-093313 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestNetworkPlugins/group/kindnet/Start (79.64s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1025 11:00:33.350669  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:00:44.340438  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:00:47.426402  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:00:47.432780  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:00:47.444133  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:00:47.465397  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:00:47.506758  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:00:47.588153  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:00:47.749692  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:00:48.070917  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:00:48.712372  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:00:49.994130  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:00:52.555842  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m19.634985702s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.64s)
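
As a reference for reading the Start tests in this group: each one is a single minikube invocation whose flags appear verbatim in the log above, and only the CNI selection changes between groups. A by-hand reproduction would look like this (the kindnet-demo profile name is illustrative; the flags are taken from the log):

    # profile name is a placeholder; flags copied from the test invocation above
    out/minikube-linux-arm64 start -p kindnet-demo --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=kindnet --driver=docker --container-runtime=crio

--wait=true with --wait-timeout=15m makes the command block until the core components are healthy, which is why a CNI that fails to come up surfaces in Start rather than in the later sub-tests.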

TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-759329 "pgrep -a kubelet"
I1025 11:00:55.080981  261256 config.go:182] Loaded profile config "auto-759329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

TestNetworkPlugins/group/auto/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-759329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wx5cd" [cae0d5f9-2d93-4628-816d-73c0794a3c24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 11:00:57.677561  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-wx5cd" [cae0d5f9-2d93-4628-816d-73c0794a3c24] Running
E1025 11:01:04.018480  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/functional-558907/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003302708s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.34s)
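
NetCatPod deploys testdata/netcat-deployment.yaml (not reproduced in this report) and waits up to 15m0s for pods labelled app=netcat to become Ready. A rough hand-rolled equivalent, with an illustrative image standing in for whatever the manifest actually uses, is:

    # image is illustrative; the real spec is testdata/netcat-deployment.yaml
    kubectl --context auto-759329 create deployment netcat \
      --image=registry.k8s.io/e2e-test-images/agnhost:2.40 -- sleep 3600
    kubectl --context auto-759329 wait pod -l app=netcat --for=condition=Ready --timeout=15m

The Pending -> Running transitions recorded above are exactly what this wait observes.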

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-759329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
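
DNS, Localhost, and HairPin are the same trio for every plugin group, all executed inside the netcat pod: nslookup checks in-cluster service DNS, the localhost probe checks the pod can reach its own port directly, and the final probe reaches the pod back through its own "netcat" Service, which only succeeds when the CNI handles hairpin NAT. In nc terms, -z probes the port without sending data, -w 5 caps each attempt at five seconds, and -i 5 inserts a five-second interval. The auto group's commands, verbatim:

    kubectl --context auto-759329 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"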

TestNetworkPlugins/group/calico/Start (82.93s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1025 11:01:28.400719  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m22.93197054s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.93s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-7j2x5" [727d7ada-af02-4a3d-add1-115aadb0302e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003934606s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
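
ControllerPod checks exist only for plugins that ship a node agent; each polls for the agent's pods by label (app=kindnet here, k8s-app=calico-node for calico below, app=flannel for flannel). The same state can be inspected by hand:

    kubectl --context kindnet-759329 get pods -n kube-system -l app=kindnet

(Per its check further down, flannel's agent lives in the kube-flannel namespace rather than kube-system.)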

TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-759329 "pgrep -a kubelet"
I1025 11:01:43.326098  261256 config.go:182] Loaded profile config "kindnet-759329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-759329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-psrq8" [320a7cee-a31f-40ca-83d3-f7174b6fd297] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-psrq8" [320a7cee-a31f-40ca-83d3-f7174b6fd297] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003617047s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-759329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/Start (65.89s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1025 11:02:49.489778  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/old-k8s-version-031983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m5.887697818s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.89s)
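
custom-flannel exercises the other form of --cni: instead of a built-in keyword, the flag is handed a CNI manifest to apply (testdata/kube-flannel.yaml here). Any manifest can be substituted the same way; the profile name and path below are placeholders:

    # path and profile are illustrative
    out/minikube-linux-arm64 start -p custom-cni-demo --memory=3072 \
      --cni=/path/to/custom-cni.yaml --driver=docker --container-runtime=crio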

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-4p4md" [598381df-31bf-45d7-bd50-8bfd359ce53e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004592107s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-759329 "pgrep -a kubelet"
I1025 11:02:56.362652  261256 config.go:182] Loaded profile config "calico-759329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

TestNetworkPlugins/group/calico/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-759329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gnpls" [e1fdbca9-8866-47eb-abed-724f1b8f26ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gnpls" [e1fdbca9-8866-47eb-abed-724f1b8f26ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003955661s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.30s)

TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-759329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.57s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-759329 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.57s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-759329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8lmmr" [0fc17411-75b4-4e51-9d9f-e13d3263b126] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 11:03:31.284430  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-8lmmr" [0fc17411-75b4-4e51-9d9f-e13d3263b126] Running
E1025 11:03:33.769833  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:03:33.776431  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:03:33.787804  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:03:33.809190  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:03:33.850558  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:03:33.931982  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:03:34.093447  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:03:34.415124  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:03:35.056655  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:03:36.339501  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:03:38.902124  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004722906s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)
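
The cert_rotation errors threaded through this block (and elsewhere in the report) are best read as background noise rather than test failures: a client-go certificate watcher inside the long-lived test process keeps re-reading client.crt for profiles such as no-preload-093313 and default-k8s-diff-port-223394 that earlier tests already tore down, hence the "no such file or directory". That those profiles are gone can be confirmed with:

    out/minikube-linux-arm64 profile list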

TestNetworkPlugins/group/enable-default-cni/Start (87.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m27.386458328s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.39s)
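
--enable-default-cni=true is the legacy spelling for the bridge CNI; current minikube documents it as replaced by --cni=bridge, which the bridge group later in this report exercises directly. The two invocations, as actually run here, are expected to converge on the same plugin:

    out/minikube-linux-arm64 start -p enable-default-cni-759329 --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p bridge-759329 --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=bridge --driver=docker --container-runtime=crio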

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-759329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (62.43s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1025 11:04:14.747617  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:04:21.255237  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/addons-184548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 11:04:55.709438  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/no-preload-093313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m2.427848231s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.43s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-759329 "pgrep -a kubelet"
I1025 11:04:59.060093  261256 config.go:182] Loaded profile config "enable-default-cni-759329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-759329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2btll" [59177be8-bcff-431e-b4c6-b10b50f7c9ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2btll" [59177be8-bcff-431e-b4c6-b10b50f7c9ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.002719385s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-kwp2v" [f5e69242-dff4-4c44-91be-5872c90a83f4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003396317s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-759329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-759329 "pgrep -a kubelet"
I1025 11:05:15.161230  261256 config.go:182] Loaded profile config "flannel-759329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/flannel/NetCatPod (11.42s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-759329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b2vvt" [530e7910-7f8c-4534-b945-6351e93efabf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b2vvt" [530e7910-7f8c-4534-b945-6351e93efabf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004386352s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.42s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-759329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (81.02s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1025 11:05:47.426717  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/default-k8s-diff-port-223394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-759329 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m21.022429021s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.02s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-759329 "pgrep -a kubelet"
I1025 11:06:53.337834  261256 config.go:182] Loaded profile config "bridge-759329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-759329 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g9ksv" [8f8d363e-b325-4b76-87ee-ff2ca6c859cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 11:06:57.448703  261256 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kindnet-759329/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-g9ksv" [8f8d363e-b325-4b76-87ee-ff2ca6c859cc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003257097s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-759329 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-759329 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (30/326)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)
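
"Preload exists" means the combined images-and-binaries tarball for this Kubernetes version was already downloaded, making per-image caching and separate binary extraction redundant. The artifact sits under the run's minikube home; the listing below is illustrative of the default layout, not taken from this run:

    # subdirectory name assumed from minikube's default cache layout
    ls /home/jenkins/minikube-integration/21767-259409/.minikube/cache/preloaded-tarball/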

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.44s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-540570 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-540570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-540570
--- SKIP: TestDownloadOnlyKic (0.44s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-487220" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-487220
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.12s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-759329 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-759329

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-759329

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-759329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-759329

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-759329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-759329

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-759329

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-759329

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-759329

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-759329

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-759329

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> k8s: describe netcat deployment:
error: context "kubenet-759329" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-759329" does not exist

>>> k8s: netcat logs:
error: context "kubenet-759329" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-759329" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-759329" does not exist

>>> k8s: coredns logs:
error: context "kubenet-759329" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-759329" does not exist

>>> k8s: api server logs:
error: context "kubenet-759329" does not exist

>>> host: /etc/cni:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: ip a s:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: ip r s:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: iptables-save:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: iptables table nat:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-759329" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-759329" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-759329" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: kubelet daemon config:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> k8s: kubelet logs:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:45:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-291330
contexts:
- context:
    cluster: kubernetes-upgrade-291330
    user: kubernetes-upgrade-291330
  name: kubernetes-upgrade-291330
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-291330
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kubernetes-upgrade-291330/client.crt
    client-key: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kubernetes-upgrade-291330/client.key

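This kubeconfig is a leftover from an earlier kubernetes-upgrade-291330 test; current-context is empty and there is no kubenet-759329 entry, which is exactly why the probes above report "context was not found" and "does not exist". A quick check of what the file actually holds (a sketch, assuming the default kubeconfig path):

# List the contexts kubectl can see; only kubernetes-upgrade-291330 would show up here:
kubectl config get-contexts
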
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-759329

>>> host: docker daemon status:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: docker daemon config:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: docker system info:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: cri-docker daemon status:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: cri-docker daemon config:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: cri-dockerd version:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: containerd daemon status:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: containerd daemon config:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: containerd config dump:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: crio daemon status:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: crio daemon config:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: /etc/crio:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

>>> host: crio config:
* Profile "kubenet-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-759329"

----------------------- debugLogs end: kubenet-759329 [took: 3.92018997s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-759329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-759329
--- SKIP: TestNetworkPlugins/group/kubenet (4.12s)

x
+
TestNetworkPlugins/group/cilium (5.7s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-759329 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-759329

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-759329

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-759329

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-759329

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-759329

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-759329

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-759329

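The probes above all target 10.96.0.10, the ClusterIP conventionally given to the cluster DNS service inside the default 10.96.0.0/12 service range, and exercise name resolution over both udp/53 and tcp/53. On a live cluster they would run inside the harness's netcat debug deployment, roughly like this sketch (the deploy/netcat name and the exact flags are assumptions, not taken from this log):

# Resolve the API service through the cluster DNS, forcing TCP:
kubectl --context cilium-759329 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local +tcp
# Probe udp/53 reachability without sending payload data:
kubectl --context cilium-759329 exec deploy/netcat -- nc -z -u 10.96.0.10 53
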
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-759329

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-759329

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-759329

>>> host: /etc/nsswitch.conf:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: /etc/hosts:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: /etc/resolv.conf:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-759329

>>> host: crictl pods:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: crictl containers:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> k8s: describe netcat deployment:
error: context "cilium-759329" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-759329" does not exist

>>> k8s: netcat logs:
error: context "cilium-759329" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-759329" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-759329" does not exist

>>> k8s: coredns logs:
error: context "cilium-759329" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-759329" does not exist

>>> k8s: api server logs:
error: context "cilium-759329" does not exist

>>> host: /etc/cni:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: ip a s:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: ip r s:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: iptables-save:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: iptables table nat:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-759329

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-759329

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-759329" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-759329" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-759329

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-759329

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-759329" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-759329" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-759329" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-759329" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-759329" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: kubelet daemon config:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> k8s: kubelet logs:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-259409/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:45:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-291330
contexts:
- context:
    cluster: kubernetes-upgrade-291330
    user: kubernetes-upgrade-291330
  name: kubernetes-upgrade-291330
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-291330
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kubernetes-upgrade-291330/client.crt
    client-key: /home/jenkins/minikube-integration/21767-259409/.minikube/profiles/kubernetes-upgrade-291330/client.key

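The same leftover kubernetes-upgrade-291330 entries appear again here. "minikube delete -p kubernetes-upgrade-291330" normally prunes them from the kubeconfig; removing them by hand is also possible (a sketch, assuming the default kubeconfig location):

# Drop the stale context, cluster, and user entries one by one:
kubectl config delete-context kubernetes-upgrade-291330
kubectl config delete-cluster kubernetes-upgrade-291330
kubectl config delete-user kubernetes-upgrade-291330
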
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-759329

>>> host: docker daemon status:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: docker daemon config:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: docker system info:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: cri-docker daemon status:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: cri-docker daemon config:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: cri-dockerd version:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: containerd daemon status:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: containerd daemon config:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: containerd config dump:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: crio daemon status:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: crio daemon config:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: /etc/crio:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

>>> host: crio config:
* Profile "cilium-759329" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-759329"

----------------------- debugLogs end: cilium-759329 [took: 5.508735204s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-759329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-759329
--- SKIP: TestNetworkPlugins/group/cilium (5.70s)